---
tags:
displayed_sidebar: docsEnglish
---
This roadmap provides a look into the proposed future of ScalarDB. The purpose of this roadmap is to provide visibility into what changes may be coming so that you can more closely follow progress, learn about key milestones, and give feedback during development. This roadmap will be updated as new versions of ScalarDB are released.
:::warning
During the course of development, this roadmap is subject to change based on user needs and feedback. Do not schedule your release plans according to the contents of this roadmap.
If you have a feature request or want to prioritize feature development, please create an issue on GitHub.
:::
- IBM Db2
- Users will be able to use IBM Db2 as an underlying database through ScalarDB Cluster.
- TiDB
- Users will be able to use TiDB as an underlying database through ScalarDB Cluster.
- Databricks
- Users will be able to use Databricks as an underlying database through ScalarDB Cluster and ScalarDB Analytics.
- Snowflake
- Users will be able to use Snowflake as an underlying database through ScalarDB Cluster and ScalarDB Analytics.
- Addition of decimal data types
  - Users will be able to use decimal data types to handle decimal numbers with high precision.
- Removal of extra-write strategy
- Users will no longer be able to use the extra-write strategy to make transactions serializable. Although ScalarDB currently provides two strategies (extra-read and extra-write strategies) to make transactions serializable, the extra-write strategy has several limitations. For example, users can't issue write and scan operations in the same transaction. Therefore, the strategy will be removed so that users don't need to worry about such limitations when creating applications.
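Applications that currently opt into the extra-write strategy can switch to the extra-read strategy ahead of this removal. A minimal configuration sketch, assuming the Consensus Commit property names used in current ScalarDB releases:

```properties
# Make transactions serializable with the extra-read strategy
# (the extra-write strategy is planned for removal).
scalar.db.consensus_commit.isolation_level=SERIALIZABLE
scalar.db.consensus_commit.serializable_strategy=EXTRA_READ
```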
- Better governance in ScalarDB Analytics
- Users will be able to be authenticated and authorized by using the ScalarDB Core features.
- Addition of read-committed isolation
  - Users will be able to run transactions with read-committed isolation to achieve better performance for applications that do not require strong correctness.
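A sketch of how this might be enabled, assuming read-committed is exposed as an additional value of the existing isolation-level property (the `READ_COMMITTED` value is an assumption until the feature ships):

```properties
# Run transactions under read-committed isolation for better performance
# when strong correctness is not required (hypothetical value).
scalar.db.consensus_commit.isolation_level=READ_COMMITTED
```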
- One-phase commit optimization for a single relational database
- Users will be able to run a transaction more efficiently by using one-phase commit if the operations of the transaction are all applied to a single database or a single partition.
- Optimization for multiple write operations per database
  - Users will be able to run transactions more efficiently through batched preparation and commitment when a transaction includes multiple write operations for the same database.
- Optimization for read-only transactions
  - Users will be able to run read-only transactions more efficiently because ScalarDB will avoid coordinator writes when committing them.
- Removal of WAL-interpreted views in ScalarDB Analytics
- Users will be able to read committed data by using ScalarDB Core instead of WAL-interpreted views, which will increase query performance.
- Container offering in Azure Marketplace for ScalarDB Cluster
- Users will be able to deploy ScalarDB Cluster by using the Azure container offering, which enables users to use a pay-as-you-go subscription model.
- Google Cloud Platform (GCP) support for ScalarDB Cluster
- Users will be able to deploy ScalarDB Cluster in Google Kubernetes Engine (GKE) in GCP.
- Container offering in AWS Marketplace for ScalarDB Analytics
- Users will be able to deploy ScalarDB Analytics by using the container offering, which enables users to use a pay-as-you-go subscription model.
- Decoupled metadata management
- Users will be able to start using ScalarDB Cluster without migrating or changing the schemas of existing applications by managing the transaction metadata of ScalarDB in a separate location.
- Views
  - Users will be able to define views so that they can manage multiple databases in a simpler, unified way.
- Addition of SQL operations for aggregation
- Users will be able to issue aggregation operations in ScalarDB SQL.
- Elimination of out-of-memory errors due to large scans
- Users will be able to issue large scans without experiencing out-of-memory errors.
- Enabling of read operations during a paused duration
  - Users will be able to issue read operations even during a paused duration so that they can still read data while taking backups.
- One-phase commit optimization
- Users will experience faster execution for simple transactions that write to a single partition. ScalarDB will omit the prepare-record and commit-state phases without sacrificing correctness if a transaction updates only one partition by exploiting the single-partition linearizable operations of the underlying databases.
- Semi-synchronous replication
  - Users will be able to replicate the data of ScalarDB-based applications in a disaster-recoverable manner. For example, assume you provide a primary service in Tokyo and a standby service in Osaka. In case of catastrophic failure in Tokyo, you can switch the primary service to Osaka so that you can continue to provide the service without data loss or extended downtime.
- Native secondary index
- Users will be able to define flexible secondary indexes. The existing secondary index is limited because it is implemented based on the common capabilities of the supported databases' secondary indexes. Therefore, for example, you cannot define a multi-column index. The new secondary index will be created at the ScalarDB layer so that you can create more flexible indexes, like a multi-column index.
- Better catalog management
- Users will be able to manage a data catalog across diverse databases in a unified manner.
- Azure Blob Storage
- Users will be able to use Azure Blob Storage as an underlying database through ScalarDB Cluster.
- Amazon S3
- Users will be able to use Amazon S3 as an underlying database through ScalarDB Cluster.
- Google Cloud Storage
- Users will be able to use Google Cloud Storage as an underlying database through ScalarDB Cluster and ScalarDB Analytics.
- Reduction of storage space needed for managing ScalarDB metadata
  - Users will likely use less storage space to run ScalarDB. ScalarDB will remove the before images of transactions after they are committed. However, whether this reduces actual storage space depends on the underlying databases.
- Red Hat OpenShift support
- Users will be able to use Red Hat–certified Helm Charts for ScalarDB Cluster in OpenShift environments.
- Container offering in Google Cloud Marketplace
- Users will be able to deploy ScalarDB Cluster by using the Google Cloud container offering, which enables users to use a pay-as-you-go subscription model.