diff --git a/docs/.prettierrc b/docs/.prettierrc index e3d733c19b2e5..3c800898e375d 100644 --- a/docs/.prettierrc +++ b/docs/.prettierrc @@ -3,7 +3,7 @@ "tabWidth": 2, "useTabs": false, "semi": true, - "singleQuote": true, + "singleQuote": false, "arrowParens": "always", "trailingComma": "es5", "bracketSpacing": true, diff --git a/docs/content/Auth/Security-Context.mdx b/docs/content/Auth/Security-Context.mdx index e862a34f94b68..efac52f994fef 100644 --- a/docs/content/Auth/Security-Context.mdx +++ b/docs/content/Auth/Security-Context.mdx @@ -11,7 +11,7 @@ context claims to evaluate access control rules. Inbound JWTs are decoded and verified using industry-standard [JSON Web Key Sets (JWKS)][link-auth0-jwks]. For access control or authorization, Cube allows you to define granular access -control rules for every cube in your data schema. Cube uses both the request and +control rules for every cube in your data model. Cube uses both the request and security context claims in the JWT token to generate a SQL query, which includes row-level constraints from the access control rules. @@ -132,11 +132,11 @@ LIMIT 10000 In the example below `user_id`, `company_id`, `sub` and `iat` will be injected into the security context and will be accessible in both the [Security Context][ref-schema-sec-ctx] and [`COMPILE_CONTEXT`][ref-cubes-compile-ctx] -global variable in the Cube Data Schema. +global variable in the Cube data model. -`COMPILE_CONTEXT` is used by Cube at schema compilation time, which allows +`COMPILE_CONTEXT` is used by Cube at data model compilation time, which allows changing the underlying dataset completely; the Security Context is only used at query execution time, which simply filters the dataset with a `WHERE` clause. @@ -151,8 +151,8 @@ query execution time, which simply filters the dataset with a `WHERE` clause. } ``` -With the same JWT payload as before, we can modify schemas before they are -compiled. 
The following schema will ensure users only see results for their +With the same JWT payload as before, we can modify models before they are +compiled. The following cube will ensure users only see results for their `company_id` in a multi-tenant deployment: ```javascript diff --git a/docs/content/Caching/Getting-Started-Pre-Aggregations.mdx b/docs/content/Caching/Getting-Started-Pre-Aggregations.mdx index 8490a824c8722..845c37d730114 100644 --- a/docs/content/Caching/Getting-Started-Pre-Aggregations.mdx +++ b/docs/content/Caching/Getting-Started-Pre-Aggregations.mdx @@ -40,7 +40,7 @@ layer][ref-caching-preaggs-cubestore]. ## Pre-Aggregations without Time Dimension To illustrate pre-aggregations with an example, let's use a sample e-commerce -database. We have a schema representing all our `Orders`: +database. We have a data model representing all our `Orders`: ```javascript cube(`Orders`, { @@ -106,9 +106,9 @@ cube(`Orders`, { ## Pre-Aggregations with Time Dimension -Using the same schema as before, we are now finding that users frequently query -for the number of orders completed per day, and that this query is performing -poorly. This query might look something like: +Using the same data model as before, we are now finding that users frequently +query for the number of orders completed per day, and that this query is +performing poorly. This query might look something like: ```json { @@ -118,7 +118,7 @@ poorly. 
This query might look something like:
```

In order to improve the performance of this query, we can add another
-pre-aggregation definition to the `Orders` schema:
+pre-aggregation definition to the `Orders` cube:

```javascript
cube(`Orders`, {
@@ -245,7 +245,7 @@ fields and still get a correct result:

| 2021-01-22 00:00:00.000000 | 13 | 150 |

This means that `quantity` and `price` are both **additive measures**, and we
-can represent them in the `LineItems` schema as follows:
+can represent them in the `LineItems` cube as follows:

```javascript
cube(`LineItems`, {
@@ -340,7 +340,7 @@ $$

We can clearly see that `523` **does not** equal `762.204545454545455`, and we
cannot treat the `profit_margin` column the same as we would any other additive
measure. Armed with the above knowledge, we can add the `profit_margin` field to
-our schema **as a [dimension][ref-schema-dims]**:
+our cube **as a [dimension][ref-schema-dims]**:

```javascript
cube(`LineItems`, {
@@ -437,17 +437,15 @@ To recap what we've learnt so far:

`count`, `sum`, `min`, `max` or `countDistinctApprox`

Cube looks for matching pre-aggregations in the order they are defined in a
-cube's schema file. Each defined pre-aggregation is then tested for a match
+cube's data model file. Each defined pre-aggregation is then tested for a match
based on the criteria in the flowchart below:

[image markup diff omitted: Pre-Aggregation Selection Flowchart]
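The "first matching pre-aggregation wins" rule referenced in this hunk can be illustrated with a small standalone sketch. This is a toy model of the documented behavior, not Cube's actual matching algorithm; all names and shapes here are illustrative:

```javascript
// Toy illustration: pre-aggregations are tested in definition order, and the
// first one covering all of the query's members is used.
const preAggregations = [
  { name: "main", measures: ["count"], dimensions: ["status"] },
  { name: "ordersByDay", measures: ["count"], dimensions: [] },
];

// A pre-aggregation "covers" a query if it contains every requested member.
const covers = (preAgg, query) =>
  query.measures.every((m) => preAgg.measures.includes(m)) &&
  query.dimensions.every((d) => preAgg.dimensions.includes(d));

const query = { measures: ["count"], dimensions: [] };
const selected = preAggregations.find((p) => covers(p, query));

console.log(selected.name); // prints "main" — defined first, even though ordersByDay also covers the query
```

Note how definition order, not specificity, decides the outcome: both entries cover the query, but the first one declared is selected.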
@@ -470,7 +468,7 @@ Some extra considerations for pre-aggregation selection:
  `['2020-01-01T00:00:00.000', '2020-01-01T23:59:59.999']`. Date ranges are
  inclusive, and the minimum granularity is `second`.

-- The order in which pre-aggregations are defined in schemas matter; the first
+- The order in which pre-aggregations are defined in models matters; the first
  matching pre-aggregation for a query is the one that is used. Both the
  measures and dimensions of any cubes specified in the query are checked to
  find a matching `rollup`.
diff --git a/docs/content/Caching/Overview.mdx b/docs/content/Caching/Overview.mdx
index 01f5a4dffb5ca..faad5e3fce000 100644
--- a/docs/content/Caching/Overview.mdx
+++ b/docs/content/Caching/Overview.mdx
@@ -49,8 +49,8 @@ more about read-only support and pre-aggregation build strategies.

-Pre-aggregations are defined in the data schema. You can learn more about
-defining pre-aggregations in [schema reference][ref-schema-ref-preaggs].
+Pre-aggregations are defined in the data model. You can learn more about
+defining pre-aggregations in the [data modeling reference][ref-schema-ref-preaggs].

```javascript
cube(`Orders`, {
@@ -142,10 +142,9 @@ The default values for `refreshKey` are

- `every: '10 second'` for all other databases.

You can use a custom SQL query to check if a refresh is required by changing
-the [`refreshKey`][ref-schema-ref-cube-refresh-key] property in a cube's Data
-Schema. Often, a `MAX(updated_at_timestamp)` for OLTP data is a viable option,
-or examining a metadata table for whatever system is managing the data to see
-when it last ran.
+the [`refreshKey`][ref-schema-ref-cube-refresh-key] property in a cube. Often, a
+`MAX(updated_at_timestamp)` for OLTP data is a viable option, or examining a
+metadata table for whatever system is managing the data to see when it last ran.
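The custom `refreshKey` discussed in the hunk above can be sketched as follows. The table and column names are assumptions, and the `cube` global is stubbed so the snippet runs standalone (in a real Cube project the framework provides it):

```javascript
// Stub of Cube's `cube()` global so this data model fragment can be
// inspected outside a Cube project.
const cube = (name, definition) => ({ name, ...definition });

const Orders = cube(`Orders`, {
  sql: `SELECT * FROM orders`,

  // Check freshness with a custom SQL query instead of the default `every`
  // interval; `updated_at_timestamp` is an assumed column name.
  refreshKey: {
    sql: `SELECT MAX(updated_at_timestamp) FROM orders`,
  },
});

console.log(Orders.refreshKey.sql);
```

Cube periodically runs the `refreshKey` query and invalidates cached results whenever its result changes.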
### <--{"id" : "In-memory Cache"}--> Disabling the cache diff --git a/docs/content/Caching/Using-Pre-Aggregations.mdx b/docs/content/Caching/Using-Pre-Aggregations.mdx index f3b13d61db3e8..92ae7f3883376 100644 --- a/docs/content/Caching/Using-Pre-Aggregations.mdx +++ b/docs/content/Caching/Using-Pre-Aggregations.mdx @@ -7,7 +7,8 @@ menuOrder: 3 Pre-aggregations is a powerful way to speed up your Cube queries. There are many configuration options to consider. Please make sure to also check [the -Pre-Aggregations reference in the data schema section][ref-schema-ref-preaggs]. +Pre-Aggregations reference in the data modeling +section][ref-schema-ref-preaggs]. ## Refresh Strategy diff --git a/docs/content/Configuration/Advanced/Multitenancy.mdx b/docs/content/Configuration/Advanced/Multitenancy.mdx index b0875731dd3cc..36d41ab35824b 100644 --- a/docs/content/Configuration/Advanced/Multitenancy.mdx +++ b/docs/content/Configuration/Advanced/Multitenancy.mdx @@ -6,7 +6,7 @@ subCategory: Advanced menuOrder: 3 --- -Cube supports multitenancy out of the box, both on database and data schema +Cube supports multitenancy out of the box, both on database and data model levels. Multiple drivers are also supported, meaning that you can have one customer’s data in MongoDB and others in Postgres with one Cube instance. @@ -34,7 +34,7 @@ combinations of these configuration options. ### <--{"id" : "Multitenancy"}--> Multitenancy vs Multiple Data Sources -In cases where your Cube schema is spread across multiple different data +In cases where your Cube data model is spread across multiple different data sources, consider using the [`dataSource` cube property][ref-cube-datasource] instead of multitenancy. 
Multitenancy is designed for cases where you need to serve different datasets for multiple users, or tenants which aren't related to @@ -169,7 +169,7 @@ cube(`Products`, { ### <--{"id" : "Multitenancy"}--> Running in Production Each unique id generated by `contextToAppId` or `contextToOrchestratorId` will -generate a dedicated set of resources, including schema compile cache, SQL +generate a dedicated set of resources, including data model compile cache, SQL compile cache, query queues, in-memory result caching, etc. Depending on your data model complexity and usage patterns, those resources can have a pretty sizable memory footprint ranging from single-digit MBs on the lower end and @@ -219,7 +219,7 @@ module.exports = { }; ``` -## Multiple DB Instances with Same Schema +## Multiple DB Instances with Same Data Model Let's consider an example where we store data for different users in different databases, but on the same Postgres host. The database name format is @@ -249,12 +249,12 @@ select the database, based on the `appId` and `userId`: The App ID (the result of [`contextToAppId`][ref-config-ctx-to-appid]) is used -as a caching key for various in-memory structures like schema compilation +as a caching key for various in-memory structures like data model compilation results, connection pool. The Orchestrator ID (the result of [`contextToOrchestratorId`][ref-config-ctx-to-orch-id]) is used as a caching key for database connections, execution queues and pre-aggregation table caches. Not -declaring these properties will result in unexpected caching issues such as -schema or data of one tenant being used for another. +declaring these properties will result in unexpected caching issues such as the +data model or data of one tenant being used for another. @@ -292,7 +292,7 @@ module.exports = { }; ``` -## Multiple Schema and Drivers +## Multiple Data Models and Drivers What if for application with ID 3, the data is stored not in Postgres, but in MongoDB? 
@@ -301,9 +301,9 @@ We can instruct Cube to connect to MongoDB in that case, instead of Postgres. To do this, we'll use the [`driverFactory`][ref-config-driverfactory] option to dynamically set database type. We will also need to modify our [`securityContext`][ref-config-security-ctx] to determine which tenant is -requesting data. Finally, we want to have separate data schemas for every +requesting data. Finally, we want to have separate data models for every application. We can use the [`repositoryFactory`][ref-config-repofactory] option -to dynamically set a repository with schema files depending on the `appId`: +to dynamically set a repository with data model files depending on the `appId`: **cube.js:** diff --git a/docs/content/Configuration/Downstream/Superset.mdx b/docs/content/Configuration/Downstream/Superset.mdx index 96d07014d32cd..cceadf81fbf28 100644 --- a/docs/content/Configuration/Downstream/Superset.mdx +++ b/docs/content/Configuration/Downstream/Superset.mdx @@ -69,7 +69,7 @@ a new database: Your cubes will be exposed as tables, where both your measures and dimensions are columns. -Let's use the following Cube data schema: +Let's use the following Cube data model: ```javascript cube(`Orders`, { @@ -124,7 +124,7 @@ a time grain of `month`. The `COUNT(*)` aggregate function is being mapped to a measure of type [count](/schema/reference/types-and-formats#measures-types-count) in Cube's -**Orders** schema file. +**Orders** data model file. 
## Additional Configuration diff --git a/docs/content/Deployment/Cloud/Continuous-Deployment.mdx b/docs/content/Deployment/Cloud/Continuous-Deployment.mdx index 2c732f823ad8b..1be39559ad056 100644 --- a/docs/content/Deployment/Cloud/Continuous-Deployment.mdx +++ b/docs/content/Deployment/Cloud/Continuous-Deployment.mdx @@ -56,8 +56,11 @@ Cube Cloud will automatically deploy from the specified production branch -Enabling this option will cause the Schema page to display the last known state of a Git-based codebase (if available), instead of reflecting the latest modifications made. -It is important to note that the logic will still be updated in both the API and the Playground. +Enabling this option will cause the Data Model page to display the +last known state of a Git-based codebase (if available), instead of reflecting +the latest modifications made. It is important to note that the logic will still +be updated in both the API and the Playground. + You can use the CLI to set up continuous deployment for a Git repository. You @@ -65,7 +68,7 @@ can also use the CLI to manually deploy changes without continuous deployment. ### <--{"id" : "Deploy with CLI"}--> Manual Deploys -You can deploy your Cube project manually. This method uploads data schema and +You can deploy your Cube project manually. This method uploads data models and configuration files directly from your local project directory. You can obtain Cube Cloud deploy token from your deployment **Settings** page. diff --git a/docs/content/Deployment/Overview.mdx b/docs/content/Deployment/Overview.mdx index c1b8f17f03e9d..f257cd6c606ae 100644 --- a/docs/content/Deployment/Overview.mdx +++ b/docs/content/Deployment/Overview.mdx @@ -42,7 +42,7 @@ API instances. API instances and Refresh Workers can be configured via [environment variables][ref-config-env] or the [`cube.js` configuration file][ref-config-js]. -They also need access to the data schema files. 
Cube Store clusters can be +They also need access to the data model files. Cube Store clusters can be configured via environment variables. You can find an example Docker Compose configuration for a Cube deployment in @@ -57,21 +57,22 @@ requests between multiple API instances. The [Cube Docker image][dh-cubejs] is used for API Instance. -API instance needs to be configured via environment variables, cube.js file and -has access to the data schema files. +API instances can be configured via environment variables or the `cube.js` +configuration file, and **must** have access to the data model files (as +specified by [`schemaPath`][ref-conf-ref-schemapath]. ## Refresh Worker A Refresh Worker updates pre-aggregations and invalidates the in-memory cache in -the background. They also keep the refresh keys up-to-date for all defined -schemas and pre-aggregations. Please note that the in-memory cache is just -invalidated but not populated by Refresh Worker. In-memory cache is populated -lazily during querying. On the other hand, pre-aggregations are eagerly -populated and kept up-to-date by Refresh Worker. +the background. They also keep the refresh keys up-to-date for all data models +and pre-aggregations. Please note that the in-memory cache is just invalidated +but not populated by Refresh Worker. In-memory cache is populated lazily during +querying. On the other hand, pre-aggregations are eagerly populated and kept +up-to-date by Refresh Worker. -[Cube Docker image][dh-cubejs] can be used for creating Refresh Workers; to make -the service act as a Refresh Worker, `CUBEJS_REFRESH_WORKER=true` should be set -in the environment variables. +The [Cube Docker image][dh-cubejs] can be used for creating Refresh Workers; to +make the service act as a Refresh Worker, `CUBEJS_REFRESH_WORKER=true` should be +set in the environment variables. ## Cube Store @@ -275,6 +276,7 @@ guide][blog-migration-guide]. 
[ref-deploy-docker]: /deployment/platforms/docker [ref-config-env]: /reference/environment-variables [ref-config-js]: /config +[ref-conf-ref-schemapath]: /config#options-reference-schema-path [redis]: https://redis.io [ref-config-redis]: /reference/environment-variables#cubejs-redis-password [blog-details]: https://cube.dev/blog/how-you-win-by-using-cube-store-part-1 diff --git a/docs/content/Deployment/Production-Checklist.mdx b/docs/content/Deployment/Production-Checklist.mdx index b866f0e9d1c32..4c95d8ade4bd1 100644 --- a/docs/content/Deployment/Production-Checklist.mdx +++ b/docs/content/Deployment/Production-Checklist.mdx @@ -97,37 +97,45 @@ deployment's health and be alerted to any issues. ## Appropriate cluster sizing -There's no one-size-fits-all when it comes to sizing Cube cluster, and its resources. -Resources required by Cube depend a lot on the amount of traffic Cube needs to serve and the amount of data it needs to process. -The following sizing estimates are based on default settings and are very generic, which may not fit your Cube use case, so you should always tweak resources based on consumption patterns you see. +There's no one-size-fits-all when it comes to sizing a Cube cluster and its +resources. Resources required by Cube significantly depend on the amount of +traffic Cube needs to serve and the amount of data it needs to process. The +following sizing estimates are based on default settings and are very generic, +which may not fit your Cube use case, so you should always tweak resources based +on consumption patterns you see. ### <--{"id" : "Appropriate cluster sizing"}--> Memory and CPU -Each Cube cluster should contain at least 2 Cube API instances. -Every Cube API instance should have at least 3GB of RAM and 2 CPU cores allocated for it. +Each Cube cluster should contain at least 2 Cube API instances. Every Cube API +instance should have at least 3GB of RAM and 2 CPU cores allocated for it. 
-Refresh workers tend to be much more CPU and memory intensive, so at least 6GB of RAM is recommended. -Please note that to take advantage of all available RAM, the Node.js heap size should be adjusted accordingly -by using the [`--max-old-space-size` option][node-heap-size]: +Refresh workers tend to be much more CPU and memory intensive, so at least 6GB +of RAM is recommended. Please note that to take advantage of all available RAM, +the Node.js heap size should be adjusted accordingly by using the +[`--max-old-space-size` option][node-heap-size]: ```sh NODE_OPTIONS="--max-old-space-size=6144" ``` -[node-heap-size]: https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-megabytes +[node-heap-size]: + https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-megabytes -The Cube Store router node should have at least 6GB of RAM and 4 CPU cores allocated for it. -Every Cube Store worker node should have at least 8GB of RAM and 4 CPU cores allocated for it. -The Cube Store cluster should have at least two worker nodes. +The Cube Store router node should have at least 6GB of RAM and 4 CPU cores +allocated for it. Every Cube Store worker node should have at least 8GB of RAM +and 4 CPU cores allocated for it. The Cube Store cluster should have at least +two worker nodes. ### <--{"id" : "Appropriate cluster sizing"}--> RPS and data volume -Depending on schema size, every Core Cube API instance can serve 1 to 10 requests per second. -Every Core Cube Store router node can serve 50-100 queries per second. -As a rule of thumb, you should provision 1 Cube Store worker node per one Cube Store partition or 1M of rows scanned in a query. -For example if your queries scan 16M of rows per query, you should have at least 16 Cube Store worker nodes provisioned. -`EXPLAIN ANALYZE` can be used to see partitions involved in a Cube Store query. -Cube Cloud ballpark performance numbers can differ as it has different Cube runtime. 
+Depending on data model size, every Core Cube API instance can serve 1 to 10 +requests per second. Every Core Cube Store router node can serve 50-100 queries +per second. As a rule of thumb, you should provision 1 Cube Store worker node +per one Cube Store partition or 1M of rows scanned in a query. For example if +your queries scan 16M of rows per query, you should have at least 16 Cube Store +worker nodes provisioned. `EXPLAIN ANALYZE` can be used to see partitions +involved in a Cube Store query. Cube Cloud ballpark performance numbers can +differ as it has different Cube runtime. [blog-migrate-to-cube-cloud]: https://cube.dev/blog/migrating-from-self-hosted-to-cube-cloud/ diff --git a/docs/content/Examples-Tutorials-Recipes/Examples.mdx b/docs/content/Examples-Tutorials-Recipes/Examples.mdx index 8b967ea40416b..3a768105ff28f 100644 --- a/docs/content/Examples-Tutorials-Recipes/Examples.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Examples.mdx @@ -41,13 +41,13 @@ The following tutorials cover advanced concepts of Cube: Learn more about prominent features of Cube: -| Feature | Story | Demo | -| :-------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------ | -| [Drill downs](https://cube.dev/docs/schema/fundamentals/additional-concepts#drilldowns) | [Introducing a drill down table API](https://cube.dev/blog/introducing-a-drill-down-table-api-in-cubejs/) | [Demo](https://drill-downs-demo.cube.dev) | -| [Compare date range](https://cube.dev/docs/query-format#time-dimensions-format) | [Comparing data over different time periods](https://cube.dev/blog/comparing-data-over-different-time-periods/) | [Demo](https://compare-date-range-demo.cube.dev) | -| [Data blending](https://cube.dev/docs/recipes/data-blending) | [Introducing data blending 
API](https://cube.dev/blog/introducing-data-blending-api/) | [Demo](https://data-blending-demo.cube.dev) | -| [Real-time data fetch](https://cube.dev/docs/real-time-data-fetch) | [Real-time dashboard guide](https://real-time-dashboard.cube.dev) | [Demo](https://real-time-dashboard-demo.cube.dev) | -| [Dynamic schema creation](https://cube.dev/docs/dynamic-schema-creation) | [Using asyncModule to generate schemas](https://github.com/cube-js/cube/tree/master/examples/async-module-simple) | — | +| Feature | Story | Demo | +| :-------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------- | :------------------------------------------------ | +| [Drill downs](https://cube.dev/docs/schema/fundamentals/additional-concepts#drilldowns) | [Introducing a drill down table API](https://cube.dev/blog/introducing-a-drill-down-table-api-in-cubejs/) | [Demo](https://drill-downs-demo.cube.dev) | +| [Compare date range](https://cube.dev/docs/query-format#time-dimensions-format) | [Comparing data over different time periods](https://cube.dev/blog/comparing-data-over-different-time-periods/) | [Demo](https://compare-date-range-demo.cube.dev) | +| [Data blending](https://cube.dev/docs/recipes/data-blending) | [Introducing data blending API](https://cube.dev/blog/introducing-data-blending-api/) | [Demo](https://data-blending-demo.cube.dev) | +| [Real-time data fetch](https://cube.dev/docs/real-time-data-fetch) | [Real-time dashboard guide](https://real-time-dashboard.cube.dev) | [Demo](https://real-time-dashboard-demo.cube.dev) | +| [Dynamic data model](https://cube.dev/docs/dynamic-schema-creation) | [Using asyncModule to generate schemas](https://github.com/cube-js/cube/tree/master/examples/async-module-simple) | — | | [Authentication](https://cube.dev/docs/security#using-json-web-key-sets-jwks) | [Auth0 
integration](https://github.com/cube-js/cube/tree/master/examples/auth0) | — | | [Authentication](https://cube.dev/docs/security#using-json-web-key-sets-jwks) | [AWS Cognito integration](https://github.com/cube-js/cube/tree/master/examples/cognito) | — | diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes.mdx index d782c90945335..f489da4423add 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes.mdx @@ -31,12 +31,12 @@ These recipes will show you the best practices of using Cube. - [Using SSL connections to a data source](/recipes/enable-ssl-connections-to-database) - [Joining data from multiple data sources](/recipes/joining-multiple-data-sources) -### <--{"id" : "Recipes"}--> Data schema +### <--{"id" : "Recipes"}--> Data modeling - [Calculating average and percentiles](https://cube.dev/docs/recipes/percentiles) - [Implementing data snapshots](/recipes/snapshots) - [Implementing Entity-Attribute-Value model](/recipes/entity-attribute-value) -- [Using different schemas for tenants](/recipes/using-different-schemas-for-tenants) +- [Using different data models for tenants](/recipes/using-different-schemas-for-tenants) - [Using dynamic measures](/recipes/referencing-dynamic-measures) - [Using dynamic union tables](/recipes/dynamically-union-tables) diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/AWS-Cognito.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/AWS-Cognito.mdx index 2c3675e8c70a3..e6d0d729e9c6a 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/AWS-Cognito.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/AWS-Cognito.mdx @@ -183,8 +183,8 @@ Save button. />
-Close the popup and use the Developer Playground to make a request. Any schemas -using the [Security Context][ref-sec-ctx] should now work as expected. +Close the popup and use the Developer Playground to make a request. Any data +models using the [Security Context][ref-sec-ctx] should now work as expected. [link-aws-cognito-hosted-ui]: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-app-integration.html#cognito-user-pools-create-an-app-integration diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/Auth0-Guide.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/Auth0-Guide.mdx index 5d13f3c3a12bd..7e1223d2ac259 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/Auth0-Guide.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/Auth/Auth0-Guide.mdx @@ -222,8 +222,8 @@ button. /> -Close the popup and use the Developer Playground to make a request. Any schemas -using the [Security Context][ref-sec-ctx] should now work as expected. +Close the popup and use the Developer Playground to make a request. Any data +models using the [Security Context][ref-sec-ctx] should now work as expected. ## Example diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/active-users.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/active-users.mdx index 08feac532539d..e84977e6977f3 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/active-users.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/active-users.mdx @@ -13,7 +13,7 @@ redirect_from: We want to know the customer engagement of our store. To do this, we need to use an [Active Users metric](https://en.wikipedia.org/wiki/Active_users). -## Data schema +## Data modeling Daily, weekly, and monthly active users are commonly referred to as DAU, WAU, MAU. 
To get these metrics, we need to use a rolling time frame to calculate a diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/column-based-access.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/column-based-access.mdx index f3007302c6ed4..e830614deb7df 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/column-based-access.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/column-based-access.mdx @@ -12,7 +12,7 @@ We want to manage user access to different data depending on a database relationship. In the recipe below, we will manage supplier access to their products. A supplier can't see other supplier's products. -## Data schema +## Data modeling To implement column-based access, we will use supplier's email from a [JSON Web Token](https://cube.dev/docs/security), and the diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/controlling-access-to-cubes-and-views.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/controlling-access-to-cubes-and-views.mdx index 127f39645c6b7..3c62bbec915b8 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/controlling-access-to-cubes-and-views.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/controlling-access-to-cubes-and-views.mdx @@ -28,7 +28,7 @@ module.exports = { }; ``` -## Data schema +## Data modeling ```javascript // Orders.js diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/dynamic-union-tables.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/dynamic-union-tables.mdx index 8117619128dae..effec32eb3029 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/dynamic-union-tables.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/dynamic-union-tables.mdx @@ -2,7 +2,7 @@ title: Using Dynamic Union Tables permalink: /recipes/dynamically-union-tables category: Examples & Tutorials -subCategory: Data schema +subCategory: Data modeling menuOrder: 4 redirect_from: - /dynamically-union-tables diff --git 
a/docs/content/Examples-Tutorials-Recipes/Recipes/entity-attribute-value.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/entity-attribute-value.mdx index 364d5b7413e88..08b9fe0e34984 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/entity-attribute-value.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/entity-attribute-value.mdx @@ -2,7 +2,7 @@ title: Implementing Entity-Attribute-Value Model (EAV) permalink: /recipes/entity-attribute-value category: Examples & Tutorials -subCategory: Data schema +subCategory: Data modeling menuOrder: 4 --- @@ -16,7 +16,7 @@ same set of associated attributes, thus making the entity-attribute-value relation a sparse matrix. In the cube, we'd like every attribute to be modeled as a dimension. -## Data schema +## Data modeling Let's explore the `Users` cube that contains the entities: @@ -95,7 +95,7 @@ their orders in any of these statuses. In terms of the EAV model: Let's explore some possible ways to model that. -### <--{"id" : "Data schema"}--> Static attributes +### <--{"id" : "Data modeling"}--> Static attributes We already know that the following statuses are present in the dataset: `completed`, `processing`, and `shipped`. Let's assume this set of statuses is @@ -179,7 +179,7 @@ The drawback is that when the set of statuses changes, we'll need to amend the cube definition in several places: update selected values and joins in SQL as well as update the dimensions. Let's see how to work around that. -### <--{"id" : "Data schema"}--> Static attributes, DRY version +### <--{"id" : "Data modeling"}--> Static attributes, DRY version We can embrace the [Don't Repeat Yourself](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) @@ -240,7 +240,7 @@ The new `UsersStatuses_DRY` cube is functionally identical to the data. However, there's still a static list of statuses present in the cube's source code. Let's work around that next. 
-### <--{"id" : "Data schema"}--> Dynamic attributes
+### <--{"id" : "Data modeling"}--> Dynamic attributes

We can eliminate the list of statuses from the cube's code by loading this list
from an external source, e.g., the data source. Here's the code from the
@@ -276,7 +276,7 @@ exports.fetchStatuses = async () => {

In the cube file, we will use the `fetchStatuses` function to load the list of
statuses. We will also wrap the cube definition with the `asyncModule` built-in
-function that allows the data schema to be created
+function that allows the data model to be created
[dynamically](https://cube.dev/docs/schema/advanced/dynamic-schema-creation).

```javascript
diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/event-analytics.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/event-analytics.mdx
index 7e06ef77b93a7..c3a69d9f692e1 100644
--- a/docs/content/Examples-Tutorials-Recipes/Recipes/event-analytics.mdx
+++ b/docs/content/Examples-Tutorials-Recipes/Recipes/event-analytics.mdx
@@ -22,11 +22,11 @@ This tutorial walks through how to transform raw event data into sessions. Many
they work as a “black box.” It doesn’t give the user either insight into or
control how these sessions defined and work.

-With Cube SQL-based sessions schema, you’ll have full control over how these
+With the Cube SQL-based sessions data model, you’ll have full control over how these
metrics are defined. It will give you great flexibility when designing sessions
and events to your unique business use case.

-A few question we’ll answer with our sessions schema:
+A few questions we’ll answer with our sessions data model:

- How do we measure session duration?
- What is our bounce rate?

@@ -430,7 +430,7 @@ cube('Sessions', {
});
```

-That was our final step in building a foundation for sessions schema.
+That was our final step in building a foundation for a sessions data model.
Congratulations on making it here! Now we’re ready to add some advanced metrics
on top of it.
diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/getting-unique-values-for-a-field.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/getting-unique-values-for-a-field.mdx index 17ae7c325ab51..2bbd181fb0589 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/getting-unique-values-for-a-field.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/getting-unique-values-for-a-field.mdx @@ -13,7 +13,7 @@ them by city. To do so, we need to display all unique values for cities in the dropdown. In the recipe below, we'll learn how to get unique values for [dimensions](https://cube.dev/docs/schema/reference/dimensions). -## Data schema +## Data modeling To filter users by city, we need to define the appropriate dimension: diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/incrementally-building-pre-aggregations-for-a-date-range.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/incrementally-building-pre-aggregations-for-a-date-range.mdx index b260616ea6e58..16dc70f3afe42 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/incrementally-building-pre-aggregations-for-a-date-range.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/incrementally-building-pre-aggregations-for-a-date-range.mdx @@ -19,7 +19,7 @@ This is most beneficial when using data warehouses with partitioning support (such as [AWS Athena][self-config-aws-athena] and [Google BigQuery][self-config-google-bigquery]). 
-## Data schema +## Data modeling Let's use an example of a cube with a nested SQL query: diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/joining-multiple-data-sources.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/joining-multiple-data-sources.mdx index b08e5da981a5a..f6d7f277ae64f 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/joining-multiple-data-sources.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/joining-multiple-data-sources.mdx @@ -49,7 +49,7 @@ module.exports = { }; ``` -## Data schema +## Data modeling First, we'll define [rollup](https://cube.dev/docs/schema/reference/pre-aggregations#parameters-type-rollup) diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/multiple-sources-same-schema.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/multiple-sources-same-schema.mdx index 1c1830921d0b7..adbe0f43defa9 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/multiple-sources-same-schema.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/multiple-sources-same-schema.mdx @@ -10,14 +10,14 @@ menuOrder: 3 We need to access the data from different data sources for different tenants. For example, we are the platform for the online website builder, and each client -can only view their data. The same data schema is used for all clients. +can only view their data. The same data model is used for all clients. ## Configuration Each client has its own database. In this recipe, the `Mango Inc` tenant keeps its data in the remote `ecom` database while the `Avocado Inc` tenant works with the local database (bootstrapped in the `docker-compose.yml` file) which has the -same schema. +same data model. To enable multitenancy, use the [`contextToAppId`](https://cube.dev/docs/config#options-reference-context-to-app-id) @@ -69,8 +69,8 @@ module.exports = { ## Query To get users for different tenants, we will send two identical requests with -different JWTs. 
Also we send a query with unknown tenant to show that he cannot
-access to the data schema of other tenants.
+different JWTs. Also, we send a query with an unknown tenant to show that it
+cannot access the data models of other tenants.

```javascript
// JWT payload for "Avocado Inc"
diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/non-additivity.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/non-additivity.mdx
index 9322ca718f6a8..87bf2e895096b 100644
--- a/docs/content/Examples-Tutorials-Recipes/Recipes/non-additivity.mdx
+++ b/docs/content/Examples-Tutorials-Recipes/Recipes/non-additivity.mdx
@@ -20,7 +20,7 @@ Pre-aggregations with such measures are less likely to be
[selected](https://cube.dev/docs/caching/pre-aggregations/getting-started#ensuring-pre-aggregations-are-targeted-by-queries-selecting-the-pre-aggregation)
to accelerate a query. However, there are a few ways to work around that.

-## Data schema
+## Data modeling

Let's explore the `Users` cube that contains various measures describing users'
age:

@@ -94,7 +94,7 @@ accelerated:

Let's explore some possible workarounds. 
-### <--{"id" : "Data schema"}--> Replacing with approximate additive measures +### <--{"id" : "Data modeling"}--> Replacing with approximate additive measures Often, non-additive `countDistinct` measures can be changed to have the [`countDistinctApprox` type](https://cube.dev/docs/schema/reference/types-and-formats#measures-types-count-distinct-approx) @@ -117,7 +117,7 @@ For example, the `distinctAges` measure can be rewritten as follows: }, ``` -### <--{"id" : "Data schema"}--> Decomposing into a formula with additive measures +### <--{"id" : "Data modeling"}--> Decomposing into a formula with additive measures Non-additive `avg` measures can be rewritten as [calculated measures](https://cube.dev/docs/schema/reference/measures#calculated-measures) @@ -142,7 +142,7 @@ For example, the `avgAge` measure can be rewritten as follows: }, ``` -### <--{"id" : "Data schema"}--> Providing multiple pre-aggregations +### <--{"id" : "Data modeling"}--> Providing multiple pre-aggregations If the two workarounds described above don't apply to your use case, feel free to create additional pre-aggregations with definitions that fully match your diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/pagination.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/pagination.mdx index b1fc7c7ca942e..3a68f2bb1651d 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/pagination.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/pagination.mdx @@ -13,9 +13,9 @@ easier to digest and to improve the performance of the query, we'll use pagination. With the recipe below, we'll get the orders list sorted by the order number. Every page will have 5 orders. -## Data schema +## Data modeling -We have the following data schema. 
+We have the following data model: ```javascript cube(`Orders`, { diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/passing-dynamic-parameters-in-a-query.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/passing-dynamic-parameters-in-a-query.mdx index e65e5523c7a44..6005bf1712640 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/passing-dynamic-parameters-in-a-query.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/passing-dynamic-parameters-in-a-query.mdx @@ -2,7 +2,7 @@ title: Passing Dynamic Parameters in a Query permalink: /recipes/passing-dynamic-parameters-in-a-query category: Examples & Tutorials -subCategory: Data schema +subCategory: Data modeling menuOrder: 4 --- @@ -14,7 +14,7 @@ filter. The trick is to get the value of the city from the user and use it in the calculation. In the recipe below, we can learn how to join the data table with itself and reshape the dataset! -## Data schema +## Data modeling Let's explore the `Users` cube data that contains various information about users, including city and gender: diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/percentiles.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/percentiles.mdx index 6a6878e5f02da..53d9ecc822224 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/percentiles.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/percentiles.mdx @@ -2,7 +2,7 @@ title: Calculating Average and Percentiles permalink: /recipes/percentiles category: Examples & Tutorials -subCategory: Data schema +subCategory: Data modeling menuOrder: 4 --- @@ -24,7 +24,7 @@ as the 50th percentile (`n = 0.5`), and it can be casually thought of as "the middle" value. 2.5 and 0 are the medians of `(1, 2, 3, 4)` and `(0, 0, 0, 10)`, respectively. 
-## Data schema +## Data modeling Let's explore the data in the `Users` cube that contains various demographic information about users, including their age: diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/schema-generation.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/schema-generation.mdx index a791f579d3b6f..e5dd32af6ed32 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/schema-generation.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/schema-generation.mdx @@ -8,9 +8,10 @@ redirect_from: - /schema-generation --- -Cube Schema is Javascript code, which means the full power of this language can -be used to configure your schema definitions. In this guide we generate several -measure definitions based on an array of strings. +Cube supports two ways to define data model files: with YAML or JavaScript +syntax. If you opt for JavaScript syntax, you can use the full power of this +programming language to configure your data model. In this guide we generate +several measure definitions based on an array of strings. One example, based on a real world scenario, is when you have a single `events` table containing an `event_type` and `user_id` column. Based on this table you @@ -59,5 +60,5 @@ code. This configuration can be reused using Please refer to [asyncModule](/schema/reference/execution-environment#async-module) -documentation to learn how to use databases and other data sources for schema -generation. +documentation to learn how to use databases and other data sources for data +model generation. 
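To make the generation idea from the schema-generation guide concrete, here is a minimal sketch in plain JavaScript of deriving one filtered `count` measure per event type. The event type values and the exact measure shape are assumptions for illustration, not the guide's verbatim code:

```javascript
// Assumed list of event types; in practice this could come from the
// `events` table or an external API.
const eventTypes = ["page_view", "button_click"];

// Derive one filtered `count` measure per event type.
const measures = Object.fromEntries(
  eventTypes.map((type) => [
    `${type}_count`,
    {
      type: "count",
      filters: [{ sql: (CUBE) => `${CUBE}.event_type = '${type}'` }],
    },
  ])
);

// The generated object could then be used in a cube definition, e.g.:
// cube(`Events`, { sql: `SELECT * FROM events`, measures });
console.log(Object.keys(measures)); // page_view_count, button_click_count
```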
diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/snapshots.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/snapshots.mdx index 5967aa1d1ac33..26aa7c308e5d9 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/snapshots.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/snapshots.mdx @@ -2,7 +2,7 @@ title: Implementing Data Snapshots permalink: /recipes/snapshots category: Examples & Tutorials -subCategory: Data schema +subCategory: Data modeling menuOrder: 4 --- @@ -17,12 +17,12 @@ date for a cube with `Product Id`, `Status`, and `Changed At` dimensions. We can consider the status property to be a [slowly changing dimension](https://en.wikipedia.org/wiki/Slowly_changing_dimension) -(SCD) of type 2. Modeling data schemas with slowly changing dimensions is an +(SCD) of type 2. Modeling data with slowly changing dimensions is an essential part of the data engineering skillset.
-## Data schema +## Data modeling Let's explore the `Statuses` cube that contains data like this: diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/using-different-schemas-for-tenants.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/using-different-schemas-for-tenants.mdx index c7ec3267a6caf..72fb99f6cad75 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/using-different-schemas-for-tenants.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/using-different-schemas-for-tenants.mdx @@ -1,5 +1,5 @@ --- -title: Using Different Schemas for Tenants +title: Using Different Data Models for Tenants permalink: /recipes/using-different-schemas-for-tenants category: Examples & Tutorials subCategory: Access control @@ -8,8 +8,8 @@ menuOrder: 2 ## Use case -We want to provide different data schemas to different tenants. In the recipe -below, we'll learn how to switch between multiple data schemas based on the +We want to provide different data models to different tenants. In the recipe +below, we'll learn how to switch between multiple data models based on the tenant. ## Configuration @@ -26,7 +26,7 @@ model/ └── Products.js ``` -Let's configure Cube to use a specific data schema path for each tenant. We'll +Let's configure Cube to use a specific data model path for each tenant. We'll pass the tenant name as a part of [`securityContext`](https://cube.dev/docs/security/context#top) into the [`repositoryFactory`](https://cube.dev/docs/config#repository-factory) function. @@ -36,7 +36,7 @@ We'll also need to override the control how the data model compilation result is cached and provide the tenant names via the [`scheduledRefreshContexts`](https://cube.dev/docs/config#scheduled-refresh-contexts) -function so a refresh worker can find all existing schemas and build +function so a refresh worker can find all existing data models and build pre-aggregations for them, if needed. 
Our `cube.js` file will look like this: @@ -58,7 +58,7 @@ module.exports = { }; ``` -## Data schema +## Data modeling In this example, we'd like to get products with odd `id` values for the `avocado` tenant and with even `id` values the `mango` tenant: diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/using-dynamic-measures.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/using-dynamic-measures.mdx index 01ed890a94519..619517ed89a5a 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/using-dynamic-measures.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/using-dynamic-measures.mdx @@ -2,7 +2,7 @@ title: Using Dynamic Measures permalink: /recipes/referencing-dynamic-measures category: Examples & Tutorials -subCategory: Data schema +subCategory: Data modeling menuOrder: 4 --- @@ -12,11 +12,11 @@ We want to understand the distribution of orders by their statuses. Let's imagine that new order statuses can be added in the future, or we get a list of statuses from an external API. To calculate the orders percentage distribution, we need to create several [measures](/schema/fundamentals/concepts#measures) -that refer to each other. But we don't want to manually change the schema for +that refer to each other. But we don't want to manually change the data model for each new status. To solve this, we will create a [schema dynamically](/schema/advanced/dynamic-schema-creation). -## Data schema +## Data modeling To calculate the number of orders as a percentage, we need to know the total number of orders and the number of orders with the desired status. 
We'll create diff --git a/docs/content/Examples-Tutorials-Recipes/Recipes/using-originalsql-and-rollups-effectively.mdx b/docs/content/Examples-Tutorials-Recipes/Recipes/using-originalsql-and-rollups-effectively.mdx index 3e7610910ceaa..8b66da0c46d7f 100644 --- a/docs/content/Examples-Tutorials-Recipes/Recipes/using-originalsql-and-rollups-effectively.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Recipes/using-originalsql-and-rollups-effectively.mdx @@ -51,7 +51,7 @@ cube('Orders', { ## Result -With the above schema, the `main` pre-aggregation is built from the `base` +With the above data model, the `main` pre-aggregation is built from the `base` pre-aggregation. [ref-schema-ref-preaggs-type-origsql]: diff --git a/docs/content/Examples-Tutorials-Recipes/Refreshing-select-partitions.mdx b/docs/content/Examples-Tutorials-Recipes/Refreshing-select-partitions.mdx index 681cde4b9932f..ba01b22149353 100644 --- a/docs/content/Examples-Tutorials-Recipes/Refreshing-select-partitions.mdx +++ b/docs/content/Examples-Tutorials-Recipes/Refreshing-select-partitions.mdx @@ -21,7 +21,7 @@ together with the [`FITER_PARAMS`](https://cube.dev/docs/schema/reference/cube#filter-params) for partition separately. -## Data schema +## Data modeling Let's explore the `Orders` cube data that contains various information about orders, including number and status: diff --git a/docs/content/FAQs/Tips-and-Tricks.mdx b/docs/content/FAQs/Tips-and-Tricks.mdx index 1af567f8d7ea4..33697d7762aad 100644 --- a/docs/content/FAQs/Tips-and-Tricks.mdx +++ b/docs/content/FAQs/Tips-and-Tricks.mdx @@ -12,10 +12,11 @@ To use your second database schema, update the `CUBE_DB_NAME` environment variable in **Settings > Configuration**. Change `CUBE_DB_NAME` to the name of your second schema. -This will trigger a new build. Once it's completed click on the Schema tab on -the left hand side navigation, and then in the upper-right corner, click the -three-dot menu -> Generate Schema. 
You should be able to see the name of the
-second schema from your database and generate new models.
+This will trigger a new build. Once it's completed, click on Data
+Model in the left-hand side navigation, and then in the upper-right
+corner, click the three-dot menu and select Generate Data Model. You
+should be able to see the name of the second schema from your database and
+generate new models.

## Can I track my customers' query usage?

@@ -37,8 +38,8 @@ the data rather than a single customer's.

To give yourself higher permissions through the SQL API, you could create an
exception for the usual Row-Level Security checks.

-In the following schema, we have created some example Row-Level Security rules
-and an exception for querying data via the SQL API.
+In the following data models, we have created some example Row-Level Security
+rules and an exception for querying data via the SQL API.

### Defining basic RLS

diff --git a/docs/content/Getting-Started/Cloud/02-Create-a-deployment.mdx b/docs/content/Getting-Started/Cloud/02-Create-a-deployment.mdx
index 23b9a600235ce..2608b9537c3cd 100644
--- a/docs/content/Getting-Started/Cloud/02-Create-a-deployment.mdx
+++ b/docs/content/Getting-Started/Cloud/02-Create-a-deployment.mdx
@@ -35,7 +35,9 @@ and click Next:

-Microsoft Azure is available in Cube Cloud on [Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) for details.
+Microsoft Azure is available in Cube Cloud on
+[Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact)
+for details.

diff --git a/docs/content/Getting-Started/Cloud/03-Generate-models.mdx b/docs/content/Getting-Started/Cloud/03-Generate-models.mdx
index 6eabd014eaf88..a8104be38141d 100644
--- a/docs/content/Getting-Started/Cloud/03-Generate-models.mdx
+++ b/docs/content/Getting-Started/Cloud/03-Generate-models.mdx
@@ -15,12 +15,12 @@ scratch or let Cube generate an initial version for you. 
## Select tables -Start by selecting the database tables to generate the data schema from, then +Start by selecting the database tables to generate the data models from, then click Measures and Dimensions: ## Measures and dimensions @@ -30,7 +30,7 @@ Click Primary Keys to progress to the next step: ## Primary keys @@ -40,7 +40,7 @@ move to the next step: ## Joins @@ -50,7 +50,7 @@ click Review: ## Review @@ -62,7 +62,7 @@ click Confirm & Generate: Cube Cloud will now generate the models and spin up your Cube deployment, and in diff --git a/docs/content/Getting-Started/Cloud/05-Add-a-pre-aggregation.mdx b/docs/content/Getting-Started/Cloud/05-Add-a-pre-aggregation.mdx index 1ffb5a82ab4e0..80990c9b66f34 100644 --- a/docs/content/Getting-Started/Cloud/05-Add-a-pre-aggregation.mdx +++ b/docs/content/Getting-Started/Cloud/05-Add-a-pre-aggregation.mdx @@ -27,7 +27,7 @@ pre-aggregation to bring up the Rollup Designer: /> The Rollup Designer will automatically suggest a pre-aggregation for the query; -click Add to the Data Schema and then retry the query in the +click Add to the Data Model and then retry the query in the Playground. This time, the query should be accelerated with a pre-aggregation. It takes a bit of time to build a pre-aggregation, so the first run might not diff --git a/docs/content/Getting-Started/Core/04-Add-a-pre-aggregation.mdx b/docs/content/Getting-Started/Core/04-Add-a-pre-aggregation.mdx index 1f6bdf78985c1..b40e196ade0c1 100644 --- a/docs/content/Getting-Started/Core/04-Add-a-pre-aggregation.mdx +++ b/docs/content/Getting-Started/Core/04-Add-a-pre-aggregation.mdx @@ -27,7 +27,7 @@ pre-aggregation to bring up the Rollup Designer: /> The Rollup Designer will automatically suggest a pre-aggregation for the query; -click Add to the Data Schema and then retry the query in the +click Add to the Data Model and then retry the query in the Playground. 
This time, the query should be accelerated with a pre-aggregation: -Microsoft Azure is available in Cube Cloud on [Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) for details. +Microsoft Azure is available in Cube Cloud on +[Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) +for details. @@ -160,10 +162,10 @@ If you run into issues here, make sure to allow the Cube Cloud IPs to access your database. This means you need to enable these IPs in your firewall. If you are using AWS, this would mean adding a security group with allowed IPs. -## Step 5: Generate the Data Schema +## Step 5: Generate the Data Model -Step five in this case consists of generating a data schema. Start by selecting -the database tables to generate the data schema from, then +Step five in this case consists of generating data models. Start by selecting +the database tables to generate the data models from, then hit Generate.
@@ -175,9 +177,9 @@ hit Generate. />
-Cube Cloud will generate the data schema and spin up your Cube deployment. With +Cube Cloud will generate the data models and spin up your Cube deployment. With this, you're done. You've created a Cube deployment, configured a database -connection, and generated a data schema! +connection, and generated data models!
-Microsoft Azure is available in Cube Cloud on [Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) for details. +Microsoft Azure is available in Cube Cloud on +[Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) +for details. @@ -123,10 +125,10 @@ If you run into issues here, make sure to allow the Cube Cloud IPs to access your database. This means you need to enable these IPs in your firewall. If you are using AWS, this would mean adding a security group with allowed IPs. -## Step 5: Generate the Data Schema +## Step 5: Generate the Data Model -Step five in this case consists of generating a data schema. Start by selecting -the database tables to generate the data schema from, then +Step five in this case consists of generating data models. Start by selecting +the database tables to generate the data models from, then hit Generate.
@@ -138,9 +140,9 @@ hit Generate. />
-Cube Cloud will generate the data schema and spin up your Cube deployment. With +Cube Cloud will generate the data models and spin up your Cube deployment. With this, you're done. You've created a Cube deployment, configured a database -connection, and generated a data schema! +connection, and generated data models!
-Microsoft Azure is available in Cube Cloud on [Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) for details.
+Microsoft Azure is available in Cube Cloud on
+[Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact)
+for details.

@@ -99,10 +101,10 @@ If you run into issues here, make sure to allow the Cube Cloud IPs to access
your database. This means you need to enable these IPs in your firewall. If you
are using AWS, this would mean adding a security group with allowed IPs.

-## Step 5: Generate the Data Schema
+## Step 5: Generate the Data Model

-Step four in this case consists of generating a data schema. Start by selecting
-the database tables to generate the data schema from, then
+Step five in this case consists of generating data models. Start by selecting
+the database tables to generate the data models from, then
hit Generate.
@@ -114,9 +116,9 @@ hit Generate. />
-Cube Cloud will generate the data schema and spin up your Cube deployment. With +Cube Cloud will generate the data models and spin up your Cube deployment. With this, you're done. You've created a Cube deployment, configured a database -connection, and generated a data schema! +connection, and generated data models!
-Microsoft Azure is available in Cube Cloud on [Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) for details. +Microsoft Azure is available in Cube Cloud on +[Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) +for details. @@ -160,24 +162,24 @@ If you run into issues here, make sure to allow the Cube Cloud IPs to access your database. This means you need to enable these IPs in your firewall. If you are using AWS, this would mean adding a security group with allowed IPs. -## Step 5: Generate the Data Schema +## Step 5: Generate the Data Model -Step five in this case consists of generating a data schema. Start by selecting -the database tables to generate the data schema from, then +Step five in this case consists of generating data models. Start by selecting +the database tables to generate the data models from, then hit Generate.
Generating schemas for a new Cube Cloud deployment
-Cube Cloud will generate the data schema and spin up your Cube deployment. With +Cube Cloud will generate the data models and spin up your Cube deployment. With this, you're done. You've created a Cube deployment, configured a database -connection, and generated a data schema! +connection, and generated data models!
-Microsoft Azure is available in Cube Cloud on [Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) for details.
+Microsoft Azure is available in Cube Cloud on
+[Premium](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact)
+for details.

diff --git a/docs/content/GraphQL-API/GraphQL-API.mdx b/docs/content/GraphQL-API/GraphQL-API.mdx
index ac35838463268..1bc71f0858f7b 100644
--- a/docs/content/GraphQL-API/GraphQL-API.mdx
+++ b/docs/content/GraphQL-API/GraphQL-API.mdx
@@ -9,9 +9,9 @@ menuOrder: 3

First, ensure you're running Cube v0.28.58 or later. Then start the project
locally in development mode, and navigate to `http://localhost:4000/` in your
-browser. After generating schema and running query you should see the GraphiQL
-interface if you click 'GraphiQL' button. If you click the 'Docs' button in the
-top-right, you can explore the introspected schema.
+browser. After generating data models and running a query, you should see the
+GraphiQL interface if you click the 'GraphiQL' button. If you click the 'Docs'
+button in the top-right, you can explore the introspected schema.

As an example, let's use the `Orders` cube from the example eCommerce database:

diff --git a/docs/content/REST-API/Query-Format.mdx b/docs/content/REST-API/Query-Format.mdx
index 30afb962255e6..932a3d699163c 100644
--- a/docs/content/REST-API/Query-Format.mdx
+++ b/docs/content/REST-API/Query-Format.mdx
@@ -33,7 +33,7 @@ A Query has the following properties:

- `timeDimensions`: A convenient way to specify a time dimension with a filter.
  It is an array of objects in [timeDimension format.](#time-dimensions-format)
- `segments`: An array of segments. A segment is a named filter, created in the
- Data Schema.
+ data model.
- `limit`: A row limit for your query. The default value is `10000`. The
  maximum allowed limit is `50000`. If you'd like to request more rows than the
  maximum allowed limit, consider using [pagination][ref-recipe-pagination]. 
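To make the query properties listed above concrete, here is an illustrative query object; the member names (`Orders.count`, `Orders.createdAt`, `Orders.highValue`) are assumptions about a data model, not part of the reference:

```javascript
// An illustrative REST API query; all member names are hypothetical.
const query = {
  measures: ["Orders.count"],
  timeDimensions: [
    {
      dimension: "Orders.createdAt",
      granularity: "day",
      dateRange: "last 7 days",
    },
  ],
  segments: ["Orders.highValue"], // a named filter defined in the data model
  limit: 100, // default is 10000; maximum allowed is 50000
};

// The query is sent URL-encoded to the REST API, e.g.:
// GET /cubejs-api/v1/load?query=<encoded JSON>
const encoded = encodeURIComponent(JSON.stringify(query));
console.log(encoded.startsWith("%7B")); // true ("{" encodes to %7B)
```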
diff --git a/docs/content/Reference/CLI/CLI-Reference.mdx b/docs/content/Reference/CLI/CLI-Reference.mdx index b8e9bc310d524..f807ab06c4c6f 100644 --- a/docs/content/Reference/CLI/CLI-Reference.mdx +++ b/docs/content/Reference/CLI/CLI-Reference.mdx @@ -69,7 +69,7 @@ npx cubejs-cli server ## generate -The `generate` command helps to build data schema for existing database tables. +The `generate` command helps to build data models for existing database tables. You can only run `generate` from the Cube app directory. This command requires an active [database connection](/config/databases). @@ -81,13 +81,13 @@ npx cubejs-cli generate -t TABLE-NAMES ### <--{"id" : "generate"}--> Flags -| Parameter | Description | Values | -| ----------------------- | ------------------------------------------------------ | --------------------------- | -| `-t, --tables ` | Comma delimited list of tables to generate schema for. | `TABLE-NAME-1,TABLE-NAME-2` | +| Parameter | Description | Values | +| ----------------------- | ----------------------------------------------------------- | --------------------------- | +| `-t, --tables ` | Comma delimited list of tables to generate data models for. | `TABLE-NAME-1,TABLE-NAME-2` | ### <--{"id" : "generate"}--> Example -Generate schema files for tables `orders` and `customers`: +Generate data model files for tables `orders` and `customers`: ```bash{promptUser: user} npx cubejs-cli generate -t orders,customers diff --git a/docs/content/Reference/Configuration/Config.mdx b/docs/content/Reference/Configuration/Config.mdx index f92909224b381..c879c4ef4eaa3 100644 --- a/docs/content/Reference/Configuration/Config.mdx +++ b/docs/content/Reference/Configuration/Config.mdx @@ -198,8 +198,8 @@ module.exports = { It is a [Multitenancy Setup][ref-multitenancy] option. `contextToAppId` is a function to determine an App ID which is used as caching -key for various in-memory structures like schema compilation results, connection -pool, etc. 
+key for various in-memory structures like data model compilation results,
+connection pool, etc.

Called on each request.

@@ -247,9 +247,9 @@ module.exports = {

### <--{"id" : "Options Reference"}--> repositoryFactory

-This option allows to customize the repository for Cube data schema files. It is
+This option allows you to customize the repository for Cube data model files. It is
a function, which accepts a context object and can dynamically select
-repositories with schema files based on
+repositories with data model files based on
[`SchemaFileRepository`][self-schemafilerepo] contract. Learn more about it in
[Multitenancy guide][ref-multitenancy].

@@ -419,13 +419,13 @@ module.exports = {

### <--{"id" : "Options Reference"}--> schemaVersion

-Schema version can be used to tell Cube schema should be recompiled in case
-schema code depends on dynamic definitions fetched from some external database
-or API. This method is called on each request however `RequestContext` parameter
-is reused per application ID as determined by
+Schema version can be used to tell Cube that the data model should be recompiled
+in case it depends on dynamic definitions fetched from some external database or
+API. This method is called on each request; however, the `RequestContext`
+parameter is reused per application ID as determined by
[`contextToAppId`][self-opts-ctx-to-appid]. If the returned string is different,
-the schema will be recompiled. It can be used in both multi-tenant and single
-tenant environments.
+the data model will be recompiled. It can be used in both multi-tenant and
+single-tenant environments.

```javascript
const tenantIdToDbVersion = {};
@@ -552,8 +552,8 @@ values that your extendContext object key can have.

`extendContext` is applied only to requests that go through API. It isn't
applied to refresh worker execution. If you're looking for a way to provide
-global environment variables for your schema please see [Execution environment
docs][ref-exec-environment-globals]. 
+global environment variables for your data model, please see [Execution
+environment docs][ref-exec-environment-globals].

@@ -566,7 +566,7 @@ module.exports = {
};
```

-You can use the custom value from extend context in your data schema like this:
+You can use the custom value from extend context in your data model like this:

```javascript
const { activeOrganization } = COMPILE_CONTEXT;
@@ -578,19 +578,19 @@ cube(`Users`, {

### <--{"id" : "Options Reference"}--> compilerCacheSize

-Maximum number of compiled schemas to persist with in-memory cache. Defaults to
-250, but optimum value will depend on deployed environment. When the max is
-reached, will start dropping the least recently used schemas from the cache.
+Maximum number of compiled data models to persist with in-memory cache.
+Defaults to 250, but the optimum value depends on the deployment environment.
+When the max is reached, Cube will start dropping the least recently used data models from the cache.

### <--{"id" : "Options Reference"}--> maxCompilerCacheKeepAlive

-Maximum length of time in ms to keep compiled schemas in memory. Default keeps
-schemas in memory indefinitely.
+Maximum length of time in ms to keep compiled data models in memory. Default
+keeps data models in memory indefinitely.

### <--{"id" : "Options Reference"}--> updateCompilerCacheKeepAlive

-Providing `updateCompilerCacheKeepAlive: true` keeps frequently used schemas in
-memory by reseting their `maxCompilerCacheKeepAlive` every time they are
+Providing `updateCompilerCacheKeepAlive: true` keeps frequently used data models
+in memory by resetting their `maxCompilerCacheKeepAlive` every time they are
accessed.

### <--{"id" : "Options Reference"}--> allowUngroupedWithoutPrimaryKey

@@ -601,9 +601,9 @@ check for `ungrouped` queries.

### <--{"id" : "Options Reference"}--> telemetry

Cube collects high-level anonymous usage statistics for servers started in
-development mode. It doesn't track any credentials, schema contents or queries
-issued. 
This statistics is used solely for the purpose of constant cube.js
-improvement.
+development mode. It doesn't track any credentials, data model contents or
+queries issued. These statistics are used solely for the purpose of constant
+cube.js improvement.

You can opt out of it any time by setting `telemetry` option to `false` or,
alternatively, by setting `CUBEJS_TELEMETRY` environment variable to `false`.
@@ -727,8 +727,8 @@ module.exports = {

### <--{"id" : "Options Reference"}--> allowJsDuplicatePropsInSchema

Boolean to enable or disable a check duplicate property names in all objects of
-a schema. The default value is `false`, and it is means the compiler would use
-the additional transpiler for check duplicates.
+a data model. The default value is `false`, which means the compiler will use
+an additional transpiler to check for duplicates.

### <--{"id" : "Options Reference"}--> initApp

@@ -772,12 +772,12 @@ system-level settings. Please use `CUBEJS_DB_QUERY_TIMEOUT` and

Timeout and interval options' values are in seconds.

-| Option              | Description                                                                                                                                                                                                                          | Default Value |
-| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- |
-| concurrency         | Maximum number of queries to be processed simultaneosly. For drivers with connection pool `CUBEJS_DB_MAX_POOL` should be adjusted accordingly. Typically pool size should be at least twice of total concurrency among all queues.   | `2`           |
-| executionTimeout    | Total timeout of single query                                                                                                                                                                                                        | `600`         |
-| orphanedTimeout     | Query will be marked for cancellation if not requested during this period.                                                                                                                                                           | `120`         |
-| heartBeatInterval   | Worker heartbeat interval. If `4*heartBeatInterval` time passes without reporting, the query gets cancelled. 
| `30`          |
+| Option            | Description                                                                                                                                                                                                                                | Default Value |
+| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- |
+| concurrency       | Maximum number of queries to be processed simultaneously. For drivers with connection pool `CUBEJS_DB_MAX_POOL` should be adjusted accordingly. Typically, the pool size should be at least twice the total concurrency among all queues.   | `2`           |
+| executionTimeout  | Total timeout of a single query                                                                                                                                                                                                              | `600`         |
+| orphanedTimeout   | Query will be marked for cancellation if not requested during this period.                                                                                                                                                                   | `120`         |
+| heartBeatInterval | Worker heartbeat interval. If `4*heartBeatInterval` time passes without reporting, the query gets cancelled.                                                                                                                                 | `30`          |

## RequestContext

@@ -796,8 +796,8 @@ and sets it to `req.securityContext`.

The default implementation of the `SchemaFileRepository` contract is defined by
the [`FileRepository`][gh-cube-filerepo] class. When using
-[`FileRepository`][gh-cube-filerepo], all schema files must be within the same
-directory.
+[`FileRepository`][gh-cube-filerepo], all data model files must be within the
+same directory.

diff --git a/docs/content/Reference/Configuration/Environment-Variables-Reference.mdx b/docs/content/Reference/Configuration/Environment-Variables-Reference.mdx
index cb652d83d5e11..413e434cbdb2c 100644
--- a/docs/content/Reference/Configuration/Environment-Variables-Reference.mdx
+++ b/docs/content/Reference/Configuration/Environment-Variables-Reference.mdx
@@ -83,7 +83,7 @@ The name of the AWS Athena catalog to use for queries.

## `CUBEJS_DB_SCHEMA`

The name of the schema to use as `information_schema` filter. Reduces count of
-tables loaded during schema generation.
+tables loaded during data model generation.
| Possible Values | Default in Development | Default in Production | | ------------------- | ---------------------- | --------------------- | @@ -93,9 +93,9 @@ tables loaded during schema generation. The cache and queue driver to use for the Cube deployment. -| Possible Values | Default in Development | Default in Production | -| ----------------- | ---------------------- | --------------------- | -| `cubestore`, `memory` | `memory` | `cubestore` | +| Possible Values | Default in Development | Default in Production | +| --------------------- | ---------------------- | --------------------- | +| `cubestore`, `memory` | `memory` | `cubestore` | ## `CUBEJS_CONCURRENCY` @@ -560,8 +560,9 @@ touch within [`CUBEJS_TOUCH_PRE_AGG_TIMEOUT`](#cubejs-touch-pre-agg-timeout). Pre-aggregations are touched whenever they are rebuilt or a Refresh Worker checks its freshness. The first drop will be initiated when the Refresh Worker is able to check freshness for every `scheduledRefresh: true` pre-aggregation. -If you have multiple Refresh Workers with different schema versions sharing the -same Cube Store cluster, then touches from both refresh workers are respected. +If you have multiple Refresh Workers with different data model versions sharing +the same Cube Store cluster, then touches from both refresh workers are +respected. | Possible Values | Default in Development | Default in Production | | --------------- | ---------------------- | --------------------- | @@ -712,8 +713,8 @@ pre-aggregations. -This environment variable is deprecated. Update to v0.32.0 or later -to use Cube Store instead of Redis. +This environment variable is deprecated. Update to v0.32.0 or later to use Cube +Store instead of Redis. @@ -727,8 +728,8 @@ The password used to connect to the Redis server. -This environment variable is deprecated. Update to v0.32.0 or later -to use Cube Store instead of Redis. +This environment variable is deprecated. 
Update to v0.32.0 or later to use Cube +Store instead of Redis. @@ -744,8 +745,8 @@ than [`CUBEJS_REDIS_POOL_MIN`](#cubejs-redis-pool-min). -This environment variable is deprecated. Update to v0.32.0 or later -to use Cube Store instead of Redis. +This environment variable is deprecated. Update to v0.32.0 or later to use Cube +Store instead of Redis. @@ -761,8 +762,8 @@ than [`CUBEJS_REDIS_POOL_MAX`](#cubejs-redis-pool-max). -This environment variable is deprecated. Update to v0.32.0 or later -to use Cube Store instead of Redis. +This environment variable is deprecated. Update to v0.32.0 or later to use Cube +Store instead of Redis. @@ -777,8 +778,8 @@ authentication. -This environment variable is deprecated. Update to v0.32.0 or later -to use Cube Store instead of Redis. +This environment variable is deprecated. Update to v0.32.0 or later to use Cube +Store instead of Redis. @@ -792,8 +793,8 @@ The host URL for a Redis server. -This environment variable is deprecated. Update to v0.32.0 or later -to use Cube Store instead of Redis. +This environment variable is deprecated. Update to v0.32.0 or later to use Cube +Store instead of Redis. @@ -841,11 +842,11 @@ for][ref-config-sched-refresh-timer]. Used in conjunction with ## `CUBEJS_SCHEMA_PATH` -The path where Cube loads schemas from. +The path where Cube loads data models from. 
-| Possible Values | Default in Development | Default in Production | -| ------------------------------------ | ---------------------- | --------------------- | -| A valid path containing Cube schemas | `schema` | `schema` | +| Possible Values | Default in Development | Default in Production | +| ---------------------------------------- | ---------------------- | --------------------- | +| A valid path containing Cube data models | `schema` | `schema` | ## `CUBEJS_SQL_PASSWORD` diff --git a/docs/content/Reference/REST-API/REST-API.mdx b/docs/content/Reference/REST-API/REST-API.mdx index c053f88fa62f3..c96961d07fa8d 100644 --- a/docs/content/Reference/REST-API/REST-API.mdx +++ b/docs/content/Reference/REST-API/REST-API.mdx @@ -22,7 +22,7 @@ Response query. - `data` - Formatted dataset of query results. - `annotation` - Metadata for query. Contains descriptions for all query items. - - `title` - Human readable title from data schema. + - `title` - Human readable title from the data model. - `shortTitle` - Short title for visualization usage (ex. chart overlay) - `type` - Data type @@ -152,7 +152,7 @@ Example response: ## `/v1/meta` -Get meta-information for cubes defined in data schema +Get meta-information for cubes defined in the data model. Response diff --git a/docs/content/SQL-API/Authentication-and-Authorization.mdx b/docs/content/SQL-API/Authentication-and-Authorization.mdx index 7bc6bc1ae06ef..634575f3b13ab 100644 --- a/docs/content/SQL-API/Authentication-and-Authorization.mdx +++ b/docs/content/SQL-API/Authentication-and-Authorization.mdx @@ -36,8 +36,8 @@ additional security. 
## Security Context (Row-Level Security) -Cube's SQL API can also use the Security Context for [Dynamic Schema -Creation][ref-dynamic-schemas] or [`queryRewrite`][ref-config-queryrewrite] +Cube's SQL API can also use the Security Context for [Dynamic data model +creation][ref-dynamic-schemas] or [`queryRewrite`][ref-config-queryrewrite] property in your [`cube.js` configuration file][ref-config-js]. By default, the SQL API uses the current user's Security Context, but this diff --git a/docs/content/SQL-API/Joins.mdx b/docs/content/SQL-API/Joins.mdx index 6b6411143bd67..38503f165eda2 100644 --- a/docs/content/SQL-API/Joins.mdx +++ b/docs/content/SQL-API/Joins.mdx @@ -8,7 +8,7 @@ menuOrder: 3 The SQL API supports joins through `__cubeJoinField` virtual column, which is available in every cube table. Join can also be done through `CROSS JOIN`. Usage of `__cubeJoinField` in a join instructs Cube to perform join as it's defined in -a data schema. Cube generates the correct joining conditions for the underlying +a data model. Cube generates the correct joining conditions for the underlying data source. For example, the following query joins the `Orders` and `Products` tables under diff --git a/docs/content/SQL-API/Overview.mdx b/docs/content/SQL-API/Overview.mdx index cfd1ffbf663dd..0b17d8c93bf2d 100644 --- a/docs/content/SQL-API/Overview.mdx +++ b/docs/content/SQL-API/Overview.mdx @@ -223,10 +223,10 @@ see below, the sorting operation is done after Cube query and projection. +--- CubeScanExecutionPlan ``` -Because of the default limit in Cube queries (50,000 rows), there is a possibility -of a wrong result if there are more than 50,000 rows. Given that queries to Cube -are usually aggregated, it is rare that they may return more than 50,000 rows, but -keep that limitation in mind when designing your queries. +Because of the default limit in Cube queries (50,000 rows), there is a +possibility of a wrong result if there are more than 50,000 rows. 
Given that
+queries to Cube are usually aggregated, it is rare for them to return more
+than 50,000 rows, but keep that limitation in mind when designing your queries.

### <--{"id" : "Querying cube tables"}--> Limit

@@ -236,14 +236,15 @@ limitation.

## Enabling SQL API in Cube Cloud

-To enable the SQL API in Cube Cloud, click Deploy SQL API from the Overview page, then click How to connect your BI tool. You
-should then see the following screen:
+To enable the SQL API in Cube Cloud, click Deploy SQL API from
+the Overview page, then click How to connect your BI
+tool. You should then see the following screen:

![SQL API details modal|690x428](https://ucarecdn.com/67508334-1641-43ec-9d50-a8f64629992b/)

## Examples

-Consider the following schema.
+Consider the following data model:

```javascript
cube(`Orders`, {
@@ -395,7 +396,7 @@ cube('Orders', {
})
```

-As we can see, we have a mix of measure types in the above schema. To query
+As we can see, we have a mix of measure types in the above data model. To query
them, we could use the following SQL statements:

```sql
@@ -423,8 +424,8 @@ SELECT MAX(maxValue) FROM Orders

### <--{"id" : "Examples"}--> Querying Segments

-Any segments defined in a schema can also be used in Cube SQL queries. Looking
-at the schema below, we have one segment `isCompleted`:
+Any segments defined in a data model can also be used in Cube SQL queries.
+Looking at the data model below, we have one segment `isCompleted`:

```javascript
cube('Orders', {
diff --git a/docs/content/Schema/Advanced/Code-Reusability-Export-and-Import.mdx b/docs/content/Schema/Advanced/Code-Reusability-Export-and-Import.mdx
index 24d28f8b48786..f63c1265f5aed 100644
--- a/docs/content/Schema/Advanced/Code-Reusability-Export-and-Import.mdx
+++ b/docs/content/Schema/Advanced/Code-Reusability-Export-and-Import.mdx
@@ -11,18 +11,18 @@ redirect_from:

-This functionality only works with schemas written in JavaScript, not YAML. 
+This functionality only works with data models written in JavaScript, not YAML.

-In Cube, your data schema is code, and code is much easier to manage when it is
+In Cube, your data model is code, and code is much easier to manage when it is
in small, digestible chunks. It is best practice to keep files small and
-containing only relevant and non-duplicated code. As your data schema grows,
+containing only relevant and non-duplicated code. As your data model grows,
maintaining and debugging is much easier with a well-organized codebase.

-Cube schemas in JavaScript supports ES6-style [`export`][mdn-js-es6-export] and
-[`import`][mdn-js-es6-import] statements, which allow writing code in one file
-and sharing it, so it can be used by another file or files.
+Cube data models in JavaScript support ES6-style [`export`][mdn-js-es6-export]
+and [`import`][mdn-js-es6-import] statements, which allow writing code in one
+file and sharing it, so it can be used by another file or files.

There are several typical use cases in Cube where it is considered best
practice to extract some variables or functions and then import it when needed.
@@ -121,11 +121,11 @@ import { capitalize } from './schema_utils';

export const capitalize = (s) => s.charAt(0).toUpperCase() + s.slice(1);
```

-### Expose environment variables to schema files
+### Expose environment variables to data model files

A common use-case is to disable pre-aggregations unless running in "production"
or "staging" environments. The best approach is to add an environment variable
-and make it available in schema files.
+and make it available in data model files.

In the example below we have the following file structure:

@@ -139,13 +139,22 @@ In the example below we have the following file structure:
└── Sales
    └── Orders.js
```
-In the `utils.js` file we read the environment variable and expose it. We will default the environment variable to the value `dev`.
+
+In the `utils.js` file we read the environment variable and expose it. We will
+default the environment variable to the value `dev`.
+
```javascript
// in utils.js
-const environment = typeof process.env.ENVIRONMENT === 'undefined' ? 'dev' : process.env.ENVIRONMENT.toLowerCase();
+const environment =
+  typeof process.env.ENVIRONMENT === 'undefined'
+    ? 'dev'
+    : process.env.ENVIRONMENT.toLowerCase();
exports.environment = () => environment;
```
+
+In the data model file, the definition will change depending on the value of the
+environment variable.
+
```javascript
// in model/Sales/Orders.js
import {environment} from '../utils'
diff --git a/docs/content/Schema/Advanced/Code-Reusability-Extending-Cubes.mdx b/docs/content/Schema/Advanced/Code-Reusability-Extending-Cubes.mdx
index 49b94d158df15..01bfddc23fe7b 100644
--- a/docs/content/Schema/Advanced/Code-Reusability-Extending-Cubes.mdx
+++ b/docs/content/Schema/Advanced/Code-Reusability-Extending-Cubes.mdx
@@ -11,7 +11,7 @@ redirect_from:

Cube supports the [`extends` feature][ref-schema-ref-cube-extends], which allows
you to reuse all declared members of a cube. This is a foundation for building
-reusable data schemas.
+reusable data models.

[Cubes][ref-schema-concepts-cubes] are represented as [JavaScript
objects][mdn-js-objects] with such properties as measures, dimensions, and
diff --git a/docs/content/Schema/Advanced/Dynamic-Schema-Creation.mdx b/docs/content/Schema/Advanced/Dynamic-Schema-Creation.mdx
index a276bb9f4bbfd..16992cb8f16a9 100644
--- a/docs/content/Schema/Advanced/Dynamic-Schema-Creation.mdx
+++ b/docs/content/Schema/Advanced/Dynamic-Schema-Creation.mdx
@@ -10,7 +10,7 @@ redirect_from:

-This functionality only works with schemas written in JavaScript, not YAML.
+This functionality only works with data models written in JavaScript, not YAML.
@@ -21,30 +21,30 @@ please post an issue on GitHub.
-Cube allows schemas to be created on-the-fly using a special +Cube allows data models to be created on-the-fly using a special [`asyncModule()`][ref-async-module] function only available in the [schema execution environment][ref-schema-env]. `asyncModule()` allows registering an -async function to be executed at the end of the data schema compile phase so +async function to be executed at the end of the data model compile phase so additional definitions can be added. This is often useful in situations where schema properties can be dynamically updated through an API, for example. -Each `asyncModule` call will be invoked only once per schema compilation. +Each `asyncModule` call will be invoked only once per data model compilation. [ref-schema-env]: /schema/reference/execution-environment [ref-async-module]: /schema/reference/execution-environment#asyncmodule -When creating schemas via `asyncModule()`, it is important to be aware of the -following differences compared to statically defining schemas with `cube()`: +When creating data models via `asyncModule()`, it is important to be aware of +the following differences compared to statically defined ones with `cube()`: - The `sql` and `drillMembers` properties for both dimensions and measures must be of type `() => string` and `() => string[]` accordingly -Cube supports importing JavaScript logic from other files in a schema, so it is -useful to declare utility functions for handling the above differences in a +Cube supports importing JavaScript logic from other files in a data model, so it +is useful to declare utility functions for handling the above differences in a separate file: [ref-import-export]: /recipes/export-import @@ -97,7 +97,7 @@ export const transformMeasures = (measures) => { In the following example, we retrieve a JSON object representing all our cubes using `fetch()`, transform some of the properties to be functions that return a string, and then finally use the [`cube()` global 
function][ref-globals] to -generate schemas from that data: +generate data models from that data: [ref-globals]: /schema/reference/execution-environment#cube-js-globals-cube-and-others @@ -156,8 +156,8 @@ asyncModule(async () => { ## Usage with schemaVersion -It is also useful to be able to recompile the schema when there are changes in -the underlying input data. For this purpose, the [`schemaVersion` +It is also useful to be able to recompile the data model when there are changes +in the underlying input data. For this purpose, the [`schemaVersion` ][link-config-schema-version] value in the `cube.js` configuration options can be specified as an asynchronous function: @@ -241,7 +241,7 @@ file][ref-config]. [ref-config-driverfactory]: /config#driver-factory [ref-config]: /config -For an example scenario where schemas may use either MySQL or Postgres +For an example scenario where data models may use either MySQL or Postgres databases, you could do the following: ```javascript diff --git a/docs/content/Schema/Advanced/Polymorphic-Cubes.mdx b/docs/content/Schema/Advanced/Polymorphic-Cubes.mdx index 51d40f75623f0..d96e1b462b048 100644 --- a/docs/content/Schema/Advanced/Polymorphic-Cubes.mdx +++ b/docs/content/Schema/Advanced/Polymorphic-Cubes.mdx @@ -34,7 +34,7 @@ has both `teacher_id` and `student_id`, which are actually references to the | 100 | 31 | 1 | Multiplication and the meaning of the Factors | | 101 | 31 | 2 | Division as an Unknown Factor Problem | -The best way to design such a schema is by using what we call **Polymorphic +The best way to design such a data model is by using what we call **Polymorphic Cubes**. It relies on the [`extends`][ref-schema-ref-cubes-extends] feature and prevents you from duplicating code, while preserving the correct domain logic. Learn more about using [`extends` here][ref-schema-advanced-extend]. 
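To make the `extends`-based approach above concrete, here is a minimal sketch of polymorphic cubes. Cube, table, and column names are illustrative, following the lessons example (a `Lessons` cube is assumed to be defined elsewhere): the shared definitions live in one base cube, and each role-specific cube extends it with its own join.

```javascript
// Base cube holding everything Teachers and Students have in common.
cube(`Users`, {
  sql: `SELECT * FROM users`,

  measures: {
    count: {
      type: `count`,
    },
  },

  dimensions: {
    id: {
      sql: `id`,
      type: `number`,
      primaryKey: true,
    },
    name: {
      sql: `name`,
      type: `string`,
    },
  },
});

// Polymorphic cubes: each extends the base cube and differs only in how it
// joins to Lessons (via teacher_id or student_id, as in the table above).
cube(`Teachers`, {
  extends: Users,

  joins: {
    Lessons: {
      relationship: `one_to_many`,
      sql: `${Teachers}.id = ${Lessons}.teacher_id`,
    },
  },
});

cube(`Students`, {
  extends: Users,

  joins: {
    Lessons: {
      relationship: `one_to_many`,
      sql: `${Students}.id = ${Lessons}.student_id`,
    },
  },
});
```

With this in place, `Teachers.count` and `Students.count` share a single definition, and a query joining either cube to `Lessons` picks up the appropriate foreign key.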
diff --git a/docs/content/Schema/Fundamentals/Additional-Concepts.mdx b/docs/content/Schema/Fundamentals/Additional-Concepts.mdx index 0679150419372..4bedcd1311d12 100644 --- a/docs/content/Schema/Fundamentals/Additional-Concepts.mdx +++ b/docs/content/Schema/Fundamentals/Additional-Concepts.mdx @@ -18,7 +18,7 @@ tables. See [`ResultSet.drillDown()`][ref-cubejs-client-ref-resultset-drilldown] on how to use this feature on the client side. A drilldown is defined on the [measure][ref-schema-ref-measures] level in your -data schema. It’s defined as a list of dimensions called **drill members**. Once +data model. It’s defined as a list of dimensions called **drill members**. Once defined, these drill members will always be used to show underlying data when drilling into that measure. @@ -243,7 +243,7 @@ A subquery requires referencing at least one [measure][ref-schema-ref-measures] in its definition. Generally speaking, all the columns used to define a subquery dimension should first be defined as [measures][ref-schema-ref-measures] on their respective cubes and then referenced from a subquery dimension over a -[join][ref-schema-ref-joins]. For example the following schema will **not** +[join][ref-schema-ref-joins]. For example the following data model will **not** work: diff --git a/docs/content/Schema/Fundamentals/Concepts.mdx b/docs/content/Schema/Fundamentals/Concepts.mdx index 6c36185e89354..19d23f622bf0a 100644 --- a/docs/content/Schema/Fundamentals/Concepts.mdx +++ b/docs/content/Schema/Fundamentals/Concepts.mdx @@ -45,7 +45,7 @@ your database using the [`sql` property][ref-schema-ref-sql]: ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, }); ``` @@ -64,7 +64,7 @@ queries too: ```javascript -cube('Orders', { +cube("Orders", { sql: ` SELECT * @@ -100,19 +100,18 @@ Dimensions represent the properties of a **single** data point in the cube. 
```javascript -cube('Orders', { - +cube("Orders", { dimensions: { id: { sql: `id`, type: `number`, // Here we explicitly let Cube know this field is the primary key // This is required for de-duplicating results when using joins - primaryKey: true + primaryKey: true, }, status: { sql: `status`, - type: `string` + type: `string`, }, }, }); @@ -142,21 +141,20 @@ represented as follows: ```javascript -cube('LineItems', { - +cube("LineItems", { dimensions: { id: { sql: `id`, type: `number`, // Again, we explicitly let Cube know this field is the primary key // This is required for de-duplicating results when using joins - primaryKey: true + primaryKey: true, }, order_id: { sql: `order_id`, type: `number`, - } + }, }, }); ``` @@ -188,16 +186,16 @@ Time-based properties should be represented as dimensions with type `time`. Time dimensions allow grouping the result set by a unit of time (e.g. hours, days, weeks). In analytics, this is also known as "granularity". -We can add the necessary time dimensions to both schemas as follows: +We can add the necessary time dimensions to both data models as follows: ```javascript -cube('Orders', { +cube("Orders", { dimensions: { created_at: { sql: `created_at`, - type: `time` + type: `time`, }, completed_at: { @@ -213,10 +211,10 @@ cubes: - name: Orders dimensions: - name: created_at - sql: 'created_at' + sql: "created_at" type: time - name: completed_at - sql: 'completed_at' + sql: "completed_at" type: time ``` @@ -225,11 +223,11 @@ cubes: ```javascript -cube('LineItems', { +cube("LineItems", { dimensions: { created_at: { sql: `created_at`, - type: `time` + type: `time`, }, }, }); @@ -259,7 +257,7 @@ following: ```javascript -cube('Orders', { +cube("Orders", { measures: { count: { type: `count`, @@ -284,14 +282,14 @@ of line items sold: ```javascript -cube('LineItems', { +cube("LineItems", { measures: { total: { sql: `price`, type: `sum`, }, }, -}) +}); ``` ```yaml @@ -327,8 +325,7 @@ In the following example, we are left-joining 
the `LineItems` cube onto our ```javascript -cube('Orders', { - +cube("Orders", { joins: { LineItems: { relationship: `many_to_one`, @@ -358,7 +355,7 @@ There are the three [types of join relationships][ref-schema-ref-joins-types]: ## Segments -Segments are filters that are predefined in the schema instead of [a Cube +Segments are filters that are predefined in the data model instead of [a Cube query][ref-backend-query-filters]. They allow simplifying Cube queries and make it easy to re-use common filters across a variety of queries. @@ -368,10 +365,10 @@ following: ```javascript -cube('Orders', { +cube("Orders", { segments: { only_completed: { - sql: `${CUBE}.status = 'completed'` + sql: `${CUBE}.status = 'completed'`, }, }, }); @@ -396,13 +393,13 @@ schema, they are defined under the `preAggregations` property: ```javascript -cube('Orders', { +cube("Orders", { preAggregations: { main: { measures: [CUBE.count], dimensions: [CUBE.status], timeDimension: CUBE.created_at, - granularity: 'day', + granularity: "day", }, }, }); diff --git a/docs/content/Schema/Fundamentals/Working-with-Joins.mdx b/docs/content/Schema/Fundamentals/Working-with-Joins.mdx index 1fae1c51588e6..47ae38104af37 100644 --- a/docs/content/Schema/Fundamentals/Working-with-Joins.mdx +++ b/docs/content/Schema/Fundamentals/Working-with-Joins.mdx @@ -22,7 +22,7 @@ To use an example, let's use two cubes, `Customers` and `Orders`: ```javascript -cube('Customers', { +cube("Customers", { dimensions: { id: { primaryKey: true, @@ -36,7 +36,7 @@ cube('Customers', { }, }); -cube('Orders', { +cube("Orders", { dimensions: { id: { primaryKey: true, @@ -46,8 +46,8 @@ cube('Orders', { customer_id: { sql: `customer_id`, type: `number`, - } - } + }, + }, }); ``` @@ -81,7 +81,7 @@ We could add a join to the `Customers` cube: ```javascript -cube('Customers', { +cube("Customers", { joins: { Orders: { relationship: `one_to_many`, @@ -146,7 +146,7 @@ with a `many_to_one` relationship on the `Orders` cube: ```javascript 
-cube('Orders', { +cube("Orders", { joins: { Customers: { relationship: `many_to_one`, @@ -167,9 +167,9 @@ cubes: -In the above schema, our `Orders` cube defines the relationship between itself -and the `Customer` cube. The same JSON query now results in the following SQL -query: +In the above data model, our `Orders` cube defines the relationship between +itself and the `Customer` cube. The same JSON query now results in the following +SQL query: ``` SELECT @@ -195,9 +195,9 @@ retrieved. In Cube, joins only need to be defined from one direction. In the above example, -we explicitly _removed_ the `one_to_many` relationship from the `Customer` cube; not -doing so would cause the query to fail as Cube would be unable to determine a -valid join path. [Click here][self-join-direction] to learn more about how the +we explicitly _removed_ the `one_to_many` relationship from the `Customer` cube; +not doing so would cause the query to fail as Cube would be unable to determine +a valid join path. [Click here][self-join-direction] to learn more about how the direction of joins affects query results. @@ -241,9 +241,9 @@ and declare the relationships from it to `Topics` cube and from `Posts` to -The following example uses the `one_to_many` relationship on the `PostTopics` cube; -this causes the direction of joins to be `Posts -> PostTopics -> Topics`. [Read -more about direction of joins here][self-join-direction]. +The following example uses the `one_to_many` relationship on the `PostTopics` +cube; this causes the direction of joins to be `Posts -> PostTopics -> Topics`. +[Read more about direction of joins here][self-join-direction]. @@ -483,7 +483,7 @@ cubes: -The following diagram shows our data schema with the `Campaigns` cube: +The following diagram shows our data model with the `Campaigns` cube:
The last piece is to finally declare a many-to-many relationship. This should be -done by declaring a [`one_to_many` relationship][ref-schema-ref-joins-relationship] -on the associative cube, `Campaigns` in our case. +done by declaring a [`one_to_many` +relationship][ref-schema-ref-joins-relationship] on the associative cube, +`Campaigns` in our case. @@ -609,13 +610,13 @@ result set. As an example, let's take two cubes, `Orders` and `Customers`: ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, measures: { count: { - sql: 'id', - type: 'count', + sql: "id", + type: "count", }, }, @@ -632,13 +633,13 @@ cube('Orders', { }, }); -cube('Customers', { +cube("Customers", { sql: `SELECT * FROM customers`, measures: { count: { - sql: 'id', - type: 'count', + sql: "id", + type: "count", }, }, @@ -693,7 +694,7 @@ the `Customers` cube so that we do not lose data from anonymous orders: ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, joins: { @@ -705,13 +706,13 @@ cube('Orders', { measures: { count: { - sql: 'id', - type: 'count', + sql: "id", + type: "count", }, total_revenue: { - sql: 'revenue', - type: 'sum', + sql: "revenue", + type: "sum", }, }, @@ -759,7 +760,7 @@ cubes: -After adding the join to the schema, we can query the cube as follows: +After adding the join to the data model, we can query the cube as follows: ```json { @@ -782,7 +783,7 @@ instance, we declare the join in the `Customers` cube: ```javascript -cube('Customers', { +cube("Customers", { sql: `SELECT * FROM customers`, joins: { @@ -794,8 +795,8 @@ cube('Customers', { measures: { count: { - sql: 'id', - type: 'count', + sql: "id", + type: "count", }, }, @@ -867,13 +868,13 @@ Views can also be used in Cube to represent the previous two scenarios: ```javascript -view('TotalRevenuePerCustomer', { +view("TotalRevenuePerCustomer", { description: `Total revenue per customer`, includes: [Orders.total_revenue, Users.company], }); 
-view('CustomersWithoutOrders', {
+view("CustomersWithoutOrders", {
  description: `Customers without orders`,
  includes: [Users.company],
@@ -1073,6 +1074,8 @@ cubes:

[ref-schema-ref-joins]: /schema/reference/joins
-[ref-schema-ref-joins-relationship]: /schema/reference/joins#parameters-relationship
-[self-many-to-many-no-assoc-table]: #many-to-many-relationship-without-an-associative-table
+[ref-schema-ref-joins-relationship]:
+  /schema/reference/joins#parameters-relationship
+[self-many-to-many-no-assoc-table]:
+  #many-to-many-relationship-without-an-associative-table
[self-join-direction]: /schema/fundamentals/joins#directions-of-joins
diff --git a/docs/content/Schema/Getting-Started.mdx b/docs/content/Schema/Getting-Started.mdx
index 93c57c631439d..faf700cf05d6f 100644
--- a/docs/content/Schema/Getting-Started.mdx
+++ b/docs/content/Schema/Getting-Started.mdx
@@ -6,10 +6,11 @@ category: Data Modeling
menuOrder: 1
---

-A Cube Data Schema is used to model raw data into meaningful business
-definitions and pre-aggregate data for optimal results. The data schema is
+A Cube data model is used to transform raw data into meaningful business
+definitions and pre-aggregate data for optimal results. The data model is
exposed through the [querying API][ref-backend-restapi] that allows end-users to
-query a wide variety of analytical queries without modifying the schema itself.
+run a wide variety of analytical queries without modifying the data model
+itself.

Let’s use a users table with the following columns as an example:

@@ -28,7 +29,7 @@ We can start with a set of simple questions about users we want to answer:

- What is the percentage of paying users out of the total?
- How many users, paying or not, are from different cities and companies?

-We don’t need to write SQL queries for every question, since the data schema
+We don’t need to write SQL queries for every question, since the data model
allows building well-organized and reusable SQL.

## 1. 
Creating a Cube @@ -274,7 +275,7 @@ As with other measures, `paying_percentage` can be used with dimensions. 1. [Examples][ref-examples] 2. [Query format][ref-backend-query-format] 3. [REST API][ref-backend-restapi] -4. [Schema reference documentation][ref-schema-cube] +4. [Data model reference documentation][ref-schema-cube] [ref-backend-restapi]: /rest-api [ref-schema-cube]: /schema/reference/cube diff --git a/docs/content/Schema/Reference/cube.mdx b/docs/content/Schema/Reference/cube.mdx index 52a007fdcddb7..33992d71c8106 100644 --- a/docs/content/Schema/Reference/cube.mdx +++ b/docs/content/Schema/Reference/cube.mdx @@ -165,7 +165,7 @@ cubes: Referencing a foreign cube in the `sql` parameter instructs Cube to build an -implicit join to this cube. Using the schema above, we'll use a query as an +implicit join to this cube. Using the data model above, let's take the following query as an example: ```json @@ -240,12 +240,11 @@ cubes: ### <--{"id" : "Parameters"}--> dataSource -Each cube in schema can have its own `dataSource` name to support scenarios -where data should be fetched from multiple databases. The value of the -`dataSource` parameter will be passed to the -[`driverFactory()`][ref-config-driverfactory] function as part of the `context` -parameter. By default, each cube has a `default` value for its `dataSource`; to -override it you can use: +Each cube can have its own `dataSource` name to support scenarios where data +should be fetched from multiple databases. The value of the `dataSource` +parameter will be passed to the [`driverFactory()`][ref-config-driverfactory] +function as part of the `context` parameter. By default, each cube has a +`default` value for its `dataSource`; to override it you can use: @@ -346,8 +345,8 @@ cubes: You can also omit the cube name while defining a cube in JavaScript. This way, Cube doesn't register this cube globally; instead it returns a reference which -you can use while combining cubes.
It makes sense to use it for dynamic schema -generation and reusing with `extends`. Previous example without defining +you can use while combining cubes. It makes sense to use it for dynamic data +model generation and for reuse with `extends`. Here's the previous example without defining the `OrderFacts` cube globally: ```javascript @@ -454,8 +453,8 @@ For example: cube(`OrderFacts`, { sql: `SELECT * FROM orders`, refreshKey: { - every: '30 5 * * 5', - timezone: 'America/Los_Angeles', + every: "30 5 * * 5", + timezone: "America/Los_Angeles", }, }); ``` @@ -720,7 +719,7 @@ or `Function`. See the examples below. ```javascript cube(`OrderFacts`, { sql: `SELECT * FROM orders WHERE ${FILTER_PARAMS.OrderFacts.date.filter( - 'date' + "date" )}`, measures: { @@ -846,7 +845,7 @@ follows: ```javascript cube(`Orders`, { - sql: `SELECT * FROM orders WHERE ${SECURITY_CONTEXT.email.filter('email')}`, + sql: `SELECT * FROM orders WHERE ${SECURITY_CONTEXT.email.filter("email")}`, dimensions: { date: { @@ -861,8 +860,7 @@ cube(`Orders`, { cubes: - name: Orders sql: > - SELECT * FROM orders - WHERE {SECURITY_CONTEXT.email.filter('email')} + SELECT * FROM orders WHERE {SECURITY_CONTEXT.email.filter("email")} dimensions: - name: date sql: date @@ -878,7 +876,7 @@ To ensure filter value presents for all requests `requiredFilter` can be used: ```javascript cube(`Orders`, { sql: `SELECT * FROM orders WHERE ${SECURITY_CONTEXT.email.requiredFilter( - 'email' + "email" )}`, dimensions: { @@ -895,7 +893,7 @@ cubes: - name: Orders sql: > SELECT * FROM orders - WHERE {SECURITY_CONTEXT.email.requiredFilter('email')} + WHERE {SECURITY_CONTEXT.email.requiredFilter("email")} dimensions: - name: date sql: date @@ -920,7 +918,7 @@ use it during your SQL generation. For example: ```javascript cube(`Orders`, { sql: `SELECT * FROM ${ - SECURITY_CONTEXT.type.unsafeValue() === 'employee' ? 'employee' : 'public' + SECURITY_CONTEXT.type.unsafeValue() === "employee" ? 
"employee" : "public" }.orders`, dimensions: { @@ -937,7 +935,7 @@ cubes: - name: Orders sql: > SELECT * FROM - {SECURITY_CONTEXT.type.unsafeValue() === 'employee' ? 'employee' : 'public'}.orders + {SECURITY_CONTEXT.type.unsafeValue() === "employee" ? "employee" : "public"}.orders dimensions: - name: date sql: date @@ -976,12 +974,12 @@ cube(`visitors`, { dimensions: { created_at_converted: { // do not use in timeDimensions query property - type: 'time', + type: "time", sql: SQL_UTILS.convertTz(`created_at`), }, created_at: { // use in timeDimensions query property - type: 'time', + type: "time", sql: `created_at`, }, }, @@ -1008,7 +1006,7 @@ cubes: ### <--{"id" : "Context Variables"}--> Compile context -There's a global `COMPILE_CONTEXT` that captured as -[`RequestContext`][ref-config-req-ctx] at the time of schema compilation. It +There's a global `COMPILE_CONTEXT` that is captured as +[`RequestContext`][ref-config-req-ctx] at the time of data model compilation. It contains `securityContext` and any other variables provided by [`extendContext`][ref-config-ext-ctx]. diff --git a/docs/content/Schema/Reference/dimensions.mdx b/docs/content/Schema/Reference/dimensions.mdx index a7f8825ffc783..e22eb8c16cadc 100644 --- a/docs/content/Schema/Reference/dimensions.mdx +++ b/docs/content/Schema/Reference/dimensions.mdx @@ -71,7 +71,7 @@ The following static `label` example will create a `size` dimension with values ```javascript -cube('Products', { +cube("Products", { dimensions: { size: { type: `string`, @@ -115,7 +115,7 @@ The `label` property can be defined dynamically as an object with a `sql` property in JavaScript models: ```javascript -cube('Products', { +cube("Products", { dimensions: { size: { type: `string`, @@ -152,7 +152,7 @@ You can add details to a dimension's definition via the `description` property: ```javascript -cube('Products', { +cube("Products", { dimensions: { comment: { type: `string`, @@ -182,13 +182,13 @@ Custom metadata. Can be used to pass any information to the frontend.
```javascript -cube('Products', { +cube("Products", { dimensions: { users_count: { sql: `${Users.count}`, type: `number`, meta: { - any: 'value', + any: "value", }, }, }, @@ -225,7 +225,7 @@ parameter to `false`. If you still want `shown` to be `true`, set it manually. ```javascript -cube('Products', { +cube("Products", { dimensions: { id: { sql: `id`, @@ -256,7 +256,7 @@ passed to the [subquery][self-subquery]. ```javascript -cube('Products', { +cube("Products", { dimensions: { users_count: { sql: `${Users.count}`, @@ -289,7 +289,7 @@ default value of `shown` is `true`. ```javascript -cube('Products', { +cube("Products", { dimensions: { comment: { type: `string`, @@ -338,7 +338,7 @@ an advanced concept and you can learn more about it [here][ref-subquery]. ```javascript -cube('Products', { +cube("Products", { dimensions: { users_count: { sql: `${Users.count}`, @@ -370,7 +370,7 @@ order to override default behavior, please use the `title` property: ```javascript -cube('Products', { +cube("Products", { dimensions: { meta_value: { type: `string`, diff --git a/docs/content/Schema/Reference/joins.mdx b/docs/content/Schema/Reference/joins.mdx index 8116ae12fdccd..98e895c6b4209 100644 --- a/docs/content/Schema/Reference/joins.mdx +++ b/docs/content/Schema/Reference/joins.mdx @@ -17,7 +17,7 @@ time. ```javascript -cube('MyCube', { +cube("MyCube", { joins: { TargetCubeName: { relationship: `one_to_one` || `one_to_many` || `many_to_one`, @@ -84,7 +84,7 @@ cubes: joins: - name: products relationship: many_to_one - sql: '{CUBE.id} = {products.order_id}' + sql: "{CUBE.id} = {products.order_id}" ``` @@ -117,9 +117,9 @@ You can use the following types of relationships: -The types of relationships listed above were introduced in v0.32.19 for clarity as -they are commonly used in the data space. 
The following aliases were used before and -are still valid, so there's no need to update existing data models: +The types of relationships listed above were introduced in v0.32.19 for clarity +as they are commonly used in the data space. The following aliases were used +before and are still valid, so there's no need to update existing data models: - `one_to_one` was known as `has_one` or `hasOne` - `one_to_many` was known as `has_many` or `hasMany` @@ -155,7 +155,7 @@ cubes: joins: - name: profiles relationship: one_to_one - sql: '{users}.id = {profiles.user_id}' + sql: "{users}.id = {profiles.user_id}" ``` @@ -188,7 +188,7 @@ cubes: joins: - name: books relationship: one_to_many - sql: '{authors}.id = {books.author_id}' + sql: "{authors}.id = {books.author_id}" ``` @@ -223,7 +223,7 @@ cubes: joins: - name: customers relationship: many_to_one - sql: '{orders}.customer_id = {customers.id}' + sql: "{orders}.customer_id = {customers.id}" ``` @@ -255,7 +255,7 @@ cubes: joins: - name: customers relationship: many_to_one - sql: '{orders}.customer_id = {customers.id}' + sql: "{orders}.customer_id = {customers.id}" ``` @@ -276,7 +276,7 @@ keys with `Order` to get the correct `Order Amount` sum result. 
Please note that ```javascript -cube('orders', { +cube("orders", { dimensions: { customer_id: { sql: `id`, @@ -363,7 +363,7 @@ cubes: - name: users dimensions: - name: id - sql: "{CUBE}.user_id || '-' || {CUBE}.signup_week || '-' || {CUBE}.activity_week" + sql: "{CUBE}.user_id || '-' || {CUBE}.signup_week || '-' || {CUBE}.activity_week" type: string primary_key: true ``` @@ -396,7 +396,7 @@ cubes: - name: users dimensions: - name: name - sql: '{CUBE}.name' + sql: "{CUBE}.name" type: string ``` @@ -456,7 +456,7 @@ cubes: - name: a joins: - name: b - sql: '{a}.b_id = {b.id}' + sql: "{a}.b_id = {b.id}" relationship: many_to_one measures: - name: count @@ -465,7 +465,7 @@ cubes: - name: c joins: - name: c - sql: '{b}.c_id = {c.id}' + sql: "{b}.c_id = {c.id}" relationship: many_to_one - name: c diff --git a/docs/content/Schema/Reference/measures.mdx b/docs/content/Schema/Reference/measures.mdx index 6290f0d8094dd..7979dce142f36 100644 --- a/docs/content/Schema/Reference/measures.mdx +++ b/docs/content/Schema/Reference/measures.mdx @@ -99,7 +99,7 @@ drill downs][ref-drilldowns]. ```javascript -cube('Orders', { +cube("Orders", { measures: { revenue: { type: `sum`, @@ -205,7 +205,7 @@ cube(`Orders`, { type: `sum`, sql: `price`, meta: { - any: 'value', + any: "value", }, }, }, @@ -458,7 +458,7 @@ cubes: - name: Orders measures: - name: purchases_to_created_account_ratio - sql: '{purchases} / {Users.count} * 100.0' + sql: "{purchases} / {Users.count} * 100.0" type: number format: percent ``` diff --git a/docs/content/Schema/Reference/pre-aggregations.mdx b/docs/content/Schema/Reference/pre-aggregations.mdx index 25bd7637f589b..732eda47db6f6 100644 --- a/docs/content/Schema/Reference/pre-aggregations.mdx +++ b/docs/content/Schema/Reference/pre-aggregations.mdx @@ -241,7 +241,7 @@ and an `orders_with_users_rollup` pre-aggregation. 
Note the following: ```javascript cube(`Users`, { - dataSource: 'postgres', + dataSource: "postgres", sql: `SELECT * FROM public.users`, preAggregations: { @@ -270,8 +270,8 @@ cube(`Users`, { }, }); -cube('Orders', { - dataSource: 'mssql', +cube("Orders", { + dataSource: "mssql", sql: `SELECT * FROM orders`, preAggregations: { @@ -346,7 +346,7 @@ cubes: type: number primary_key: true - name: name - sql: '{CUBE}.first_name || {CUBE}.last_name' + sql: "{CUBE}.first_name || {CUBE}.last_name" type: string - name: Orders @@ -366,7 +366,7 @@ cubes: joins: - name: Users relationship: many_to_one - sql: '{CUBE.user_id} = {Users.id}' + sql: "{CUBE.user_id} = {Users.id}" measures: - name: count type: count @@ -399,7 +399,7 @@ directly: ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, preAggregations: { @@ -442,7 +442,8 @@ scenarios where real-time data is required. [Lambda pre-aggregations][ref-caching-lambda-preaggs] can be used to combine data from a data source and a pre-aggregation, or even from multiple -pre-aggregations across different schema-compatible cubes. +pre-aggregations across different cubes that share the same dimensions +and measures. ### <--{"id" : "Parameters"}--> measures @@ -452,7 +453,7 @@ cube][ref-schema-measures] that should be included in the pre-aggregation: ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, measures: { @@ -491,7 +492,7 @@ cube][ref-schema-dimensions] that should be included in the pre-aggregation: ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, dimensions: { @@ -528,13 +529,13 @@ cubes: The `timeDimension` property can be any [`dimension`][ref-schema-dimensions] of type [`time`][ref-schema-types-dim-time]. All other measures and dimensions in -the schema are aggregated This property is an extremely useful tool for +the data model are aggregated. This property is an extremely useful tool for improving performance with massive datasets. 
```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, measures: { @@ -601,7 +602,7 @@ data by week and persist it to Cube Store. ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, preAggregations: { @@ -691,7 +692,7 @@ The `partitionGranularity` defines the granularity for each ```javascript -cube('Orders', { +cube("Orders", { sql: `SELECT * FROM orders`, preAggregations: { @@ -781,7 +782,7 @@ cubes: - name: main measures: [CUBE.count] refresh_key: - sql: 'SELECT MAX(created_at) FROM orders' + sql: "SELECT MAX(created_at) FROM orders" ``` @@ -860,7 +861,7 @@ cubes: measures: [CUBE.count] refresh_key: every: 1 hour - sql: 'SELECT MAX(created_at) FROM orders' + sql: "SELECT MAX(created_at) FROM orders" ``` @@ -1388,7 +1389,7 @@ cube(`Orders`, { type: `originalSql`, indexes: { timestampIndex: { - columns: ['timestamp'], + columns: ["timestamp"], }, }, }, diff --git a/docs/content/Schema/Reference/schema-execution-environment.mdx b/docs/content/Schema/Reference/schema-execution-environment.mdx index 716faa4f647a6..c0492314e5bf6 100644 --- a/docs/content/Schema/Reference/schema-execution-environment.mdx +++ b/docs/content/Schema/Reference/schema-execution-environment.mdx @@ -9,23 +9,24 @@ redirect_from: - /schema-execution-environment --- -Cube Schema Compiler uses [Node.js VM][nodejs-vm] to execute schema compiler -code. It gives required flexibility allowing transpiling schema files before -they get executed, storing schemas in external databases and executing untrusted -code in a safe manner. Cube Schema JavaScript is standard JavaScript supported -by Node.js starting in version 8 with the following exceptions. +Cube Data Model Compiler uses [Node.js VM][nodejs-vm] to execute data model +compiler code. It gives the flexibility to transpile data model files before +they are executed, to store data models in external databases, and to execute +untrusted code safely. 
Cube data model JavaScript is +standard JavaScript supported by Node.js starting in version 8 with the +following exceptions. ## Require -Being executed in VM data schema, JavaScript code doesn't have access to -[Node.js require][nodejs-require] directly. Instead `require()` is implemented -by Schema Compiler to provide access to other data schema files and to regular -Node.js modules. Besides that, the data schema `require()` can resolve Cube +Being executed in a VM, data model JavaScript code doesn't have access to [Node.js +require][nodejs-require] directly. Instead `require()` is implemented by the Data +Model Compiler to provide access to other data model files and to regular +Node.js modules. Besides that, the data model `require()` can resolve Cube packages such as `Funnels` unlike standard Node.js `require()`. ## Node.js globals (process.env, console.log and others) -Data schema JavaScript code doesn't have access to any standard Node.js globals +Data model JavaScript code doesn't have access to any standard Node.js globals like `process` or `console`. In order to access `process.env`, utility functions can be added outside the `model/` directory: @@ -38,7 +39,7 @@ exports.tableSchema = () => process.env.TABLE_SCHEMA; **model/cubes/Users.js**: ```javascript -import { tableSchema } from '../tablePrefix'; +import { tableSchema } from "../tablePrefix"; cube(`Users`, { sql: `SELECT * FROM ${tableSchema()}.users`, @@ -49,7 +50,7 @@ ## console.log -Data schema cannot access `console.log` due to a separate [VM +Data models cannot access `console.log` due to a separate [VM instance][nodejs-vm] that runs it. Suppose you find yourself writing complex logic for SQL generation that depends on a lot of external input. In that case, you probably want to introduce a helper service outside of `schema` directory @@ -58,12 +59,12 @@ that you can debug as usual Node.js code. 
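To make that concrete, here is a minimal sketch of such a helper. The file name `sqlHelpers.js` and the `inClause()` function are illustrative, not part of Cube's API:

```javascript
// sqlHelpers.js — a plain Node.js module kept outside the model/ directory,
// so it can be debugged with console.log or a regular test runner.

// Builds a SQL IN (...) clause from a list of ids.
function inClause(column, ids) {
  if (!Array.isArray(ids) || ids.length === 0) {
    // No ids: return a clause that matches no rows.
    return "1 = 0";
  }
  return `${column} IN (${ids.join(", ")})`;
}

module.exports = { inClause };
```

A model file could then import it (e.g. `import { inClause } from "../sqlHelpers";`) and interpolate the result into a cube's `sql`.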
## Cube globals (cube and others) Cube defines `cube()`, `context()` and `asyncModule()` global variable functions -in order to provide API for schema configuration which aren't normally -accessible outside of Cube schema. +in order to provide API for data model configuration which aren't normally +accessible outside of a Cube data model. ## Import / Export -Data schema JavaScript files are transpiled to convert ES6 `import` and `export` +Data model JavaScript files are transpiled to convert ES6 `import` and `export` expressions to corresponding Node.js calls. In fact `import` is routed to [Require][self-require] method. @@ -87,8 +88,8 @@ Later, you can `import` into the cube, wherever needed: ```javascript // in Users.js -import { TEST_USER_IDS } from './constants'; -import usersSql from './usersSql'; +import { TEST_USER_IDS } from "./constants"; +import usersSql from "./usersSql"; cube(`Users`, { sql: usersSql(`users`), @@ -102,7 +103,7 @@ cube(`Users`, { segments: { excludeTestUsers: { - sql: `${CUBE}.id NOT IN (${TEST_USER_IDS.join(', ')})`, + sql: `${CUBE}.id NOT IN (${TEST_USER_IDS.join(", ")})`, }, }, }); @@ -110,9 +111,9 @@ cube(`Users`, { ## asyncModule -Schemas can be externally stored and retrieved through an asynchronous operation -using the `asyncModule()`. For more information, consult the [Dynamic Schema -Creation][ref-dynamic-schemas] page. +Data models can be externally stored and retrieved through an asynchronous +operation using the `asyncModule()`. For more information, consult the [Dynamic +Schema Creation][ref-dynamic-schemas] page. 
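For instance, data models could be fetched at compile time along these lines. This is a sketch only: the endpoint URL and response shape are hypothetical, and `asyncModule()` and `cube()` are globals that exist solely inside Cube's compiler VM, so this is not standalone Node.js code:

```javascript
const fetch = require("node-fetch");

asyncModule(async () => {
  // Hypothetical endpoint returning [{ name: "Orders", sql: "SELECT ..." }, ...]
  const tables = await (await fetch("https://example.com/tables")).json();

  for (const table of tables) {
    cube(table.name, {
      sql: table.sql,
    });
  }
});
```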
## Context symbols transpile diff --git a/docs/content/Schema/Reference/segments.mdx b/docs/content/Schema/Reference/segments.mdx index 2b7513f0a4117..6b1fe2b3c1593 100644 --- a/docs/content/Schema/Reference/segments.mdx +++ b/docs/content/Schema/Reference/segments.mdx @@ -65,8 +65,8 @@ As with other cube member definitions segments can be ```javascript const userSegments = { - sf_users: ['San Francisco', 'CA'], - ny_users: ['New York City', 'NY'], + sf_users: ["San Francisco", "CA"], + ny_users: ["New York City", "NY"], }; cube(`Users`, { diff --git a/docs/content/Schema/Reference/types-and-formats.mdx b/docs/content/Schema/Reference/types-and-formats.mdx index 2a06bab26ae9a..4caef6ad74063 100644 --- a/docs/content/Schema/Reference/types-and-formats.mdx +++ b/docs/content/Schema/Reference/types-and-formats.mdx @@ -22,7 +22,7 @@ below, we create a `string` measure by converting a numerical value to a string: ```javascript -cube('Orders', { +cube("Orders", { measures: { high_or_low: { type: `string`, @@ -51,7 +51,7 @@ below, we create a `time` measure from an existing dimension: ```javascript -cube('Orders', { +cube("Orders", { measures: { last_order: { sql: `MAX(created_at)`, @@ -84,7 +84,8 @@ cubes: ### <--{"id" : "Measure Types"}--> boolean -The `boolean` measure type can be used to condense data into a single boolean value. +The `boolean` measure type can be used to condense data into a single boolean +value. The example below adds an `is_completed` measure which only returns `true` if **all** orders have the `completed` status: @@ -92,7 +93,7 @@ The example below adds an `is_completed` measure which only returns `true` if ```javascript -cube('Orders', { +cube("Orders", { measures: { is_completed: { sql: `BOOL_AND(status = 'completed')`, @@ -123,7 +124,7 @@ Measures][ref-schema-ref-calc-measures]. 
```javascript -cube('Orders', { +cube("Orders", { measures: { purchases_ratio: { sql: `${purchases} / ${count} * 100.0`, @@ -152,7 +153,7 @@ expression: ```javascript -cube('Orders', { +cube("Orders", { measures: { ratio: { sql: `sum(${CUBE}.amount) / count(*)`, @@ -187,7 +188,7 @@ count. [Learn more about Drill Downs][ref-drilldowns]. ```javascript -cube('Orders', { +cube("Orders", { measures: { numberOfUsers: { type: `count`, @@ -224,11 +225,11 @@ results in a table column, or interpolated JavaScript expression. ```javascript -cube('Orders', { +cube("Orders", { measures: { uniqueUserCount: { sql: `user_id`, - type: 'countDistinct', + type: "countDistinct", }, }, }); @@ -261,11 +262,11 @@ The `sql` parameter is required and can take any valid SQL expression. ```javascript -cube('Orders', { +cube("Orders", { measures: { uniqueUserCount: { sql: `user_id`, - type: 'countDistinctApprox', + type: "countDistinctApprox", }, }, }); @@ -296,7 +297,7 @@ function. ```javascript -cube('Orders', { +cube("Orders", { measures: { revenue: { sql: `${chargesAmount}`, @@ -320,7 +321,7 @@ cubes: ```javascript -cube('Orders', { +cube("Orders", { measures: { revenue: { sql: `amount`, @@ -344,7 +345,7 @@ cubes: ```javascript -cube('Orders', { +cube("Orders", { measures: { revenue: { sql: `fee * 0.1`, @@ -377,7 +378,7 @@ that results in a numeric table column, or interpolated JavaScript expression. ```javascript -cube('Orders', { +cube("Orders", { measures: { avg_transaction: { sql: `${transaction_amount}`, @@ -405,7 +406,7 @@ Type of measure `min` is calculated as a minimum of values defined in `sql`. ```javascript -cube('Orders', { +cube("Orders", { measures: { date_first_purchase: { sql: `date_purchase`, @@ -433,7 +434,7 @@ Type of measure `max` is calculated as a maximum of values defined in `sql`. 
```javascript -cube('Orders', { +cube("Orders", { measures: { date_last_purchase: { sql: `date_purchase`, @@ -462,7 +463,7 @@ Type of measure `runningTotal` is calculated as summation of values defined in ```javascript -cube('Orders', { +cube("Orders", { measures: { total_subscriptions: { sql: `subscription_amount`, @@ -495,7 +496,7 @@ see as output. ```javascript -cube('Orders', { +cube("Orders", { measures: { purchase_conversion: { sql: `${purchase}/${checkout}*100.0`, @@ -511,7 +512,7 @@ cubes: - name: Orders measures: - name: purchase_conversion - sql: "{purchase}/{checkout}*100.0" + sql: "{purchase} / {checkout} * 100.0" type: number format: percent ``` @@ -525,7 +526,7 @@ cubes: ```javascript -cube('Orders', { +cube("Orders", { measures: { total_amount: { sql: `amount`, @@ -558,14 +559,15 @@ This section describes the various types that can be assigned to a In order to be able to create time series charts, Cube needs to identify time dimension which is a timestamp column in your database. -You can define several time dimensions in schemas and apply each when creating -charts. Note that type of target column should be `TIMESTAMP`. Please use [this -guide][ref-string-time-dims] if your datetime information is stored as a string. +You can define several time dimensions in data models and apply each when +creating charts. Note that type of target column should be `TIMESTAMP`. Please +use [this guide][ref-string-time-dims] if your datetime information is stored as +a string. ```javascript -cube('Orders', { +cube("Orders", { dimensions: { completed_at: { sql: `completed_at`, @@ -598,7 +600,7 @@ The following model creates a field `full_name` by combining 2 fields: ```javascript -cube('Orders', { +cube("Orders", { dimensions: { full_name: { sql: `CONCAT(${first_name}, ' ', ${last_name})`, @@ -626,7 +628,7 @@ cubes: ```javascript -cube('Orders', { +cube("Orders", { dimensions: { amount: { sql: `amount`, @@ -655,7 +657,7 @@ boolean. 
For example: ```javascript -cube('Orders', { +cube("Orders", { dimensions: { is_enabled: { sql: `is_enabled`, @@ -684,7 +686,7 @@ it requires to set two fields: latitude and longitude. ```javascript -cube('Orders', { +cube("Orders", { dimensions: { location: { type: `geo`, @@ -723,7 +725,7 @@ cubes: ```javascript -cube('Orders', { +cube("Orders", { dimensions: { image: { sql: `CONCAT('https://img.example.com/id/', ${id})`, @@ -755,7 +757,7 @@ can take any valid SQL expression. ```javascript -cube('Orders', { +cube("Orders", { dimensions: { image: { sql: `id`, @@ -789,7 +791,7 @@ The `sql` parameter is required and can take any valid SQL expression. ```javascript -cube('Orders', { +cube("Orders", { dimensions: { orderLink: { sql: `'http://myswebsite.com/orders/' || id`, @@ -834,7 +836,7 @@ cubes: ```javascript -cube('Orders', { +cube("Orders", { dimensions: { amount: { sql: `amount`, @@ -864,7 +866,7 @@ cubes: ```javascript -cube('Orders', { +cube("Orders", { dimensions: { open_rate: { sql: `COALESCE(100.0 * ${uniq_open_count} / NULLIF(${delivered_count}, 0), 0)`, @@ -881,7 +883,7 @@ cubes: dimensions: - name: open_rate sql: - 'COALESCE(100.0 * {uniq_open_count} / NULLIF({delivered_count}, 0), 0)' + "COALESCE(100.0 * {uniq_open_count} / NULLIF({delivered_count}, 0), 0)" type: number format: percent ``` diff --git a/docs/content/Schema/Reference/view.mdx b/docs/content/Schema/Reference/view.mdx index 703a99c712280..3a9342680828e 100644 --- a/docs/content/Schema/Reference/view.mdx +++ b/docs/content/Schema/Reference/view.mdx @@ -65,7 +65,7 @@ cube through nested joins: ```javascript view(`CompletedOrders`, { - description: 'Count of completed orders', + description: "Count of completed orders", includes: [Orders.completed_count], measures: { diff --git a/docs/content/Workspace/Access Control.mdx b/docs/content/Workspace/Access Control.mdx index fe2b2ac30acdc..dc677c59592bc 100644 --- a/docs/content/Workspace/Access Control.mdx +++ b/docs/content/Workspace/Access 
Control.mdx @@ -5,12 +5,14 @@ category: Workspace menuOrder: 5 --- -As an account administrator, you can define roles with specific -permissions for resources and apply those roles to users within the account. +As an account administrator, you can define roles with specific permissions for +resources and apply those roles to users within the account. -Access control is available in Cube Cloud on [Enterprise](https://cube.dev/pricing) tier. [Contact us](https://cube.dev/contact) for details. +Access control is available in Cube Cloud on the +[Enterprise](https://cube.dev/pricing) tier. +[Contact us](https://cube.dev/contact) for details. @@ -39,7 +41,7 @@ description for the role, then click "Add Policy" and select either "Deployment" or "Global" for this policy's scope. Deployment policies apply to deployment-level functionality, such as the -Playground and Schema Editor. Global policies apply to account-level +Playground and Data Model Editor. Global policies apply to account-level functionality, such as Alerts and Billing. Once the policy scope has been selected, you can restrict which actions this role can perform by selecting "Specific" and using the dropdown to select specific actions. diff --git a/docs/content/Workspace/CLI.mdx b/docs/content/Workspace/CLI.mdx index 8447a0e9e0a23..d563a7f6cf19f 100644 --- a/docs/content/Workspace/CLI.mdx +++ b/docs/content/Workspace/CLI.mdx @@ -9,7 +9,7 @@ The Cube command line interface (CLI) is used for various Cube workflows. It could help you in areas such as: - Creating a new Cube service; -Generating a schema based on your database tables; +Generating a data model based on your database tables; ## Quickstart @@ -42,8 +42,8 @@ npx cubejs-cli create hello-world -d postgres Once run, the `create` command will create a new project directory that contains the scaffolding for your new Cube project.
This includes all the files necessary to spin up the Cube backend, example frontend code for displaying the results of -Cube queries in a React app, and some example schema files to highlight the -format of the Cube Data Schema layer. +Cube queries in a React app, and some example data model files to highlight the +format of the Cube Data Model layer. The `.env` file in this project directory contains placeholders for the relevant database credentials. For MySQL, Redshift, and PostgreSQL, you'll need to fill diff --git a/docs/content/Workspace/Cube-IDE.mdx b/docs/content/Workspace/Cube-IDE.mdx index bcde1422a98e7..3bbdfe05b602e 100644 --- a/docs/content/Workspace/Cube-IDE.mdx +++ b/docs/content/Workspace/Cube-IDE.mdx @@ -8,18 +8,18 @@ redirect_from: - /cloud/dev-tools/cube-ide --- -With the Cube IDE, you can write and test and your Cube data schemas from your +With the Cube IDE, you can write and test your Cube data models from your browser. -Cube IDE is available in Cube Cloud on [all tiers](https://cube.dev/pricing). +Cube IDE is available in Cube Cloud on [all tiers](https://cube.dev/pricing). -Cube Cloud can create branch-based development API instances to quickly -test changes in the data schema in your frontend applications before pushing -them into production. +Cube Cloud can create branch-based development API instances to quickly test +changes in the data model in your frontend applications before pushing them into +production. @@ -28,11 +28,11 @@ them into production. In development mode, you can safely make changes to your project without affecting production deployment. Development mode uses a separate Git branch and allows testing your changes in Playground or via a separate API endpoint -specific to this branch. This development API hot-reloads your schema changes, -allowing you to quickly test API changes from your applications.
This development API hot-reloads your data model +changes, allowing you to quickly test API changes from your applications. -To enter development mode, navigate to the **Schema** page and click **Enter -Development Mode**. +To enter development mode, navigate to the Data Model screen and +click Enter Development Mode.
You can exit development mode by clicking Exit in the grey banner. If -you've been editing a schema and navigate away, Cube Cloud will warn you if +you've been editing a data model and navigate away, Cube Cloud will warn you if there are any unsaved changes: ![Unsaved changes warning modal|690x431](https://ucarecdn.com/67b8e943-0043-4398-84fc-91d83765ed10/) @@ -91,8 +91,8 @@ want to delete, then open the switcher and click Remove Branch: ## Generating models Cube Cloud supports generating models from a data source after the initial -deployment creation. The Generate Schema on the Schema -page will let you re-generate models from your source database, or alternatively -add rollups to existing schemas: +deployment creation. The Generate Data Model button on the Data +Model page will let you re-generate models from your source database, or +alternatively add rollups to existing data models: -![Generate rollups or schema modal|690x428](https://ucarecdn.com/b83b8b21-7894-4363-9b0e-5f390b50cd6f/) +![Generate rollups or data model modal|690x428](https://ucarecdn.com/b83b8b21-7894-4363-9b0e-5f390b50cd6f/)