diff --git a/samples/databases/wide-world-importers/README.md b/samples/databases/wide-world-importers/README.md
index 4b127bd769..04cb77b720 100644
--- a/samples/databases/wide-world-importers/README.md
+++ b/samples/databases/wide-world-importers/README.md
@@ -1,6 +1,6 @@
 # WideWorldImporters Sample Database for SQL Server and Azure SQL Database
-WideWorldImporters is a sample for SQL Server and Azure SQL Database. It showcases best practices in database design, as well as how to best leverage SQL Server features in a database.
+WideWorldImporters is a sample for SQL Server and Azure SQL Database. It showcases database design, as well as how to best leverage SQL Server features in a database.
 
 WideWorldImporters is a wholesale company. Transactions and real-time analytics are performed in the database WideWorldImporters. The database WideWorldImportersDW is an OLAP database, focused on analytics.
diff --git a/samples/databases/wide-world-importers/documentation/README.md b/samples/databases/wide-world-importers/documentation/README.md
new file mode 100644
index 0000000000..72f6c331b7
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/README.md
@@ -0,0 +1,7 @@
+# Documentation for the WideWorldImporters Sample Database
+
+This folder contains documentation for the sample.
+
+Start with [root.md](root.md).
+
+Note that these contents will most likely be migrated to MSDN.
diff --git a/samples/databases/wide-world-importers/documentation/root.md b/samples/databases/wide-world-importers/documentation/root.md
new file mode 100644
index 0000000000..da340e3f28
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/root.md
@@ -0,0 +1,65 @@
+# Wide World Importers Sample for SQL Server and Azure SQL Database
+
+Wide World Importers is a comprehensive database sample that illustrates both database design and how SQL Server features can be leveraged in an application.
+
+Note that the sample is meant to be representative of a typical database. It does not include every feature of SQL Server. The design of the database follows one common set of standards, but there are many ways one might build a database.
+
+The source code for the sample can be found on the SQL Server Samples GitHub repository:
+[wide-world-importers](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers).
+
+The latest released version of the sample:
+[wide-world-importers-v0.1](https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v0.1)
+
+The documentation for the sample is organized as follows:
+
+## Overview
+
+__[Wide World Importers Overview](wwi-overview.md)__
+
+Overview of the sample company Wide World Importers, and the workflows addressed by the sample.
+
+## Main OLTP Database WideWorldImporters
+
+__[WideWorldImporters Installation and Configuration](wwi-oltp-htap-installation.md)__
+
+Instructions for the installation and configuration of the core database WideWorldImporters that is used for transaction processing (OLTP - OnLine Transaction Processing) and operational analytics (HTAP - Hybrid Transactional/Analytical Processing).
+
+__[WideWorldImporters Database Catalog](wwi-oltp-htap-catalog.md)__
+
+Description of the schemas and tables used in the WideWorldImporters database.
+
+__[WideWorldImporters Use of SQL Server Features and Capabilities](wwi-oltp-htap-sql-features.md)__
+
+Describes how WideWorldImporters leverages core SQL Server features.
+
+__[WideWorldImporters Sample Queries](wwi-oltp-htap-sample-queries.md)__
+
+Sample queries for the WideWorldImporters database.
+
+## Data Warehousing and Analytics Database WideWorldImportersDW
+
+__[WideWorldImportersDW Installation and Configuration](wwi-olap-installation.md)__
+
+Instructions for the installation and configuration of the OLAP database WideWorldImportersDW.
+
+__[WideWorldImportersDW OLAP Database Catalog](wwi-olap-catalog.md)__
+
+Description of the schemas and tables used in the WideWorldImportersDW database, which is the sample database for data warehousing and analytics processing (OLAP).
+
+__[WideWorldImporters ETL Workflow](wwi-etl.md)__
+
+Workflow for the ETL (Extract, Transform, Load) process that migrates data from the transactional database WideWorldImporters to the data warehouse WideWorldImportersDW.
+
+__[WideWorldImportersDW Use of SQL Server Features and Capabilities](wwi-olap-sql-features.md)__
+
+Describes how WideWorldImportersDW leverages SQL Server features for analytics processing.
+
+__[WideWorldImportersDW OLAP Sample Queries](wwi-olap-sample-queries.md)__
+
+Sample analytics queries leveraging the WideWorldImportersDW database.
+
+## Data generation
+
+__[WideWorldImporters Data Generation](wwi-data-generation.md)__
+
+Describes how additional data can be generated in the sample database, for example inserting sales and purchase data up to the current date.
diff --git a/samples/databases/wide-world-importers/documentation/wwi-data-generation.md b/samples/databases/wide-world-importers/documentation/wwi-data-generation.md
new file mode 100644
index 0000000000..b645075b52
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-data-generation.md
@@ -0,0 +1,35 @@
+# WideWorldImporters Data Generation
+
+The released versions of the WideWorldImporters and WideWorldImportersDW databases contain data starting January 1st, 2013, up to the day these databases were generated.
+
+If the sample databases are used at a later date, for demonstration or illustration purposes, it may be beneficial to include more recent sample data in the database.
+
+## Data Generation in WideWorldImporters
+
+To generate sample data up to the current date, follow these steps:
+
+1. If you have not yet done so, install a clean version of the WideWorldImporters database.
+For installation instructions, see [WideWorldImporters Installation and Configuration](wwi-oltp-htap-installation.md).
+2. Execute the following statement in the database:
+
+    EXEC WideWorldImporters.DataLoadSimulation.PopulateDataToCurrentDate
+        @AverageNumberOfCustomerOrdersPerDay = 60,
+        @SaturdayPercentageOfNormalWorkDay = 50,
+        @SundayPercentageOfNormalWorkDay = 0,
+        @IsSilentMode = 1,
+        @AreDatesPrinted = 1;
+
+This statement adds sample sales and purchase data to the database, up to the current date. It outputs the progress of the data generation day-by-day. It will take roughly 10 minutes for every year that needs data. Note that there are some differences in the data generated between runs, since there is a random factor in the data generation.
+
+To increase or decrease the amount of data generated, in terms of orders per day, change the value for the parameter `@AverageNumberOfCustomerOrdersPerDay`. The parameters `@SaturdayPercentageOfNormalWorkDay` and `@SundayPercentageOfNormalWorkDay` are used to determine the order volume for weekend days.
+
+## Importing Data in WideWorldImportersDW
+
+To import sample data up to the current date in the OLAP database WideWorldImportersDW, follow these steps:
+
+1. Execute the data generation logic in the WideWorldImporters OLTP database, using the steps above.
+2. If you have not yet done so, install a clean version of the WideWorldImportersDW database. For installation instructions, see [WideWorldImportersDW Installation and Configuration](wwi-olap-installation.md).
+3. Reseed the OLAP database by executing the following statement in the database:
+
+    EXECUTE [Application].Configuration_ReseedETL
+
+4. Run the SSIS package **Daily ETL.ispac** to import the data into the OLAP database. For instructions on how to run the ETL job, see [WideWorldImporters ETL Workflow](wwi-etl.md).
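+
+As a quick sanity check after the generation completes, the following query (an illustrative sketch, not part of the sample scripts) returns the most recent order date in the OLTP database, which should be close to the current date:
+
+    SELECT MAX(OrderDate) AS LatestOrderDate
+    FROM WideWorldImporters.Sales.Orders;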
diff --git a/samples/databases/wide-world-importers/documentation/wwi-etl.md b/samples/databases/wide-world-importers/documentation/wwi-etl.md
new file mode 100644
index 0000000000..a88e18a19d
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-etl.md
@@ -0,0 +1,58 @@
+# WideWorldImporters ETL Workflow
+
+The ETL package WWI_Integration is used to migrate data from the WideWorldImporters database to the WideWorldImportersDW database as the data changes. The package is run periodically (most commonly daily).
+
+## Overview
+
+The design of the package uses SQL Server Integration Services (SSIS) to orchestrate bulk T-SQL operations (rather than performing separate transformations within SSIS) to ensure high performance.
+
+Dimensions are loaded first, followed by fact tables. The package can be re-run at any time after a failure.
+
+The workflow is as follows:
+
+![WideWorldImporters ETL workflow](/media/wide-world-importers-etl-workflow.png "WideWorldImporters ETL Workflow")
+
+It starts with an expression task that works out the appropriate cutoff time. This time is the current time less a few seconds. (This is more robust than requesting data right up to the current time.) It then truncates any milliseconds from the time.
+
+The main processing starts by populating the Date dimension table. It ensures that all dates for the current year have been populated in the table.
+
+After this, a series of data flow tasks loads each dimension, then each fact.
+
+## Prerequisites
+
+- SQL Server 2016 (or higher) with the databases WideWorldImporters and WideWorldImportersDW. These can be on the same or different instances of SQL Server.
+- SQL Server Management Studio (SSMS)
+- SQL Server 2016 Integration Services (SSIS).
+  - Make sure you have created an SSIS Catalog. If not, right-click **Integration Services** in SSMS Object Explorer, and choose **Add Catalog**. Follow the defaults. It will ask you to enable sqlclr and provide a password.
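+  - If you prefer to prepare the instance ahead of time, the following statements are a sketch of the manual equivalent of the CLR-enablement step the wizard prompts for:
+
+        EXEC sp_configure 'clr enabled', 1;
+        RECONFIGURE;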
+
+
+## Download
+
+The latest release of the sample:
+
+[wide-world-importers-v0.1](https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v0.1)
+
+Download the SSIS package file **Daily ETL.ispac**.
+
+Source code for the SSIS package is available from the following location:
+
+[wide-world-importers](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/wwi-integration-etl)
+
+## Install
+
+1. Deploy the SSIS package.
+   - Open the "Daily ETL.ispac" package from Windows Explorer. This will launch the Integration Services Deployment Wizard.
+   - Under "Select Source", keep the default Project Deployment, with the path pointing to the "Daily ETL.ispac" package.
+   - Under "Select Destination" enter the name of the server that hosts the SSIS catalog.
+   - Select a path under the SSIS catalog, for example under a new folder "WideWorldImporters".
+   - Finalize the wizard by clicking Deploy.
+
+2. Create a SQL Server Agent job for the ETL process.
+   - In SSMS, right-click "SQL Server Agent" and select New->Job.
+   - Pick a name, for example "WideWorldImporters ETL".
+   - Add a Job Step of type "SQL Server Integration Services Package".
+   - Select the server with the SSIS catalog, and select the "Daily ETL" package.
+   - Under Configuration->Connection Managers ensure the connections to the source and target are configured correctly. The default is to connect to the local instance.
+   - Click OK to create the Job.
+
+3. Execute or schedule the Job.
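+
+Once created, the job can also be started from T-SQL; this sketch assumes the example job name used above:
+
+    EXEC msdb.dbo.sp_start_job @job_name = N'WideWorldImporters ETL';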
diff --git a/samples/databases/wide-world-importers/documentation/wwi-olap-catalog.md b/samples/databases/wide-world-importers/documentation/wwi-olap-catalog.md
new file mode 100644
index 0000000000..09e81ffd35
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-olap-catalog.md
@@ -0,0 +1,47 @@
+# WideWorldImportersDW OLAP Database Catalog
+
+The WideWorldImportersDW database is used for data warehousing and analytical processing. The transactional data about sales and purchases is generated in the WideWorldImporters database, and loaded into the WideWorldImportersDW database using a [daily ETL process](wwi-etl.md).
+
+The data in WideWorldImportersDW thus mirrors the data in WideWorldImporters, but the tables are organized differently. While WideWorldImporters has a traditional normalized schema, WideWorldImportersDW uses the [star schema](https://wikipedia.org/wiki/Star_schema) approach for its table design. Besides the fact and dimension tables, the database includes a number of staging tables that are used in the ETL process.
+
+## Schemas
+
+The different types of tables are organized in three schemas.
+
+|Schema|Description|
+|-----------------------------|---------------------|
+|Dimension|Dimension tables.|
+|Fact|Fact tables.|
+|Integration|Staging tables and other objects needed for ETL.|
+
+## Tables
+
+The dimension and fact tables are listed below. The tables in the Integration schema are used only for the ETL process, and are not listed.
+
+### Dimension tables
+
+WideWorldImportersDW has the following dimension tables. The description includes the relationship with the source tables in the WideWorldImporters database.
+
+|Table|Source tables|
+|-----------------------------|---------------------|
+|City|`Application.Cities`, `Application.StateProvinces`, `Application.Countries`.|
+|Customer|`Sales.Customers`, `Sales.BuyingGroups`, `Sales.CustomerCategories`.|
+|Date|New table with information about dates, including financial year (based on a November 1st start for the financial year).|
+|Employee|`Application.People`.|
+|StockItem|`Warehouse.StockItems`, `Warehouse.Colors`, `Warehouse.PackageTypes`.|
+|Supplier|`Purchasing.Suppliers`, `Purchasing.SupplierCategories`.|
+|PaymentMethod|`Application.PaymentMethods`.|
+|TransactionType|`Application.TransactionTypes`.|
+
+### Fact tables
+
+WideWorldImportersDW has the following fact tables. The description includes the relationship with the source tables in the WideWorldImporters database, as well as the classes of analytics/reporting queries each fact table is typically used with.
+
+|Table|Source tables|Sample Analytics|
+|-----------------------------|---------------------|---------------------|
+|Order|`Sales.Orders` and `Sales.OrderLines`|Sales people, picker/packer productivity, and on time to pick orders. In addition, low stock situations leading to back orders.|
+|Sale|`Sales.Invoices` and `Sales.InvoiceLines`|Sales dates, delivery dates, profitability over time, profitability by sales person.|
+|Purchase|`Purchasing.PurchaseOrderLines`|Expected vs actual lead times.|
+|Transaction|`Sales.CustomerTransactions` and `Purchasing.SupplierTransactions`|Measuring issue dates vs finalization dates, and amounts.|
+|Movement|`Warehouse.StockItemTransactions`|Movements over time.|
+|Stock Holding|`Warehouse.StockItemHoldings`|On-hand stock levels and value.|
diff --git a/samples/databases/wide-world-importers/documentation/wwi-olap-installation.md b/samples/databases/wide-world-importers/documentation/wwi-olap-installation.md
new file mode 100644
index 0000000000..60b6108bc3
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-olap-installation.md
@@ -0,0 +1,52 @@
+# WideWorldImportersDW Installation and Configuration
+
+## Prerequisites
+
+- [SQL Server 2016](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2016) (or higher) or [Azure SQL Database](https://azure.microsoft.com/services/sql-database/). To use the Full version of the sample, use SQL Server Evaluation/Developer/Enterprise Edition.
+- [SQL Server Management Studio](https://msdn.microsoft.com/library/mt238290.aspx). For the best results use the April 2016 preview or later.
+
+## Download
+
+The latest release of the sample:
+
+[wide-world-importers-v0.1](https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v0.1)
+
+Download the sample WideWorldImportersDW database backup/bacpac that corresponds to your edition of SQL Server or Azure SQL Database.
+
+Source code to recreate the sample database is available from the following location.
+Note that data population is based on ETL from the OLTP database (WideWorldImporters):
+
+[wide-world-importers](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/wwi-dw-database-scripts)
+
+## Install
+
+### SQL Server
+
+To restore a backup to a SQL Server instance, you can use Management Studio.
+1. Open SQL Server Management Studio and connect to the target SQL Server instance.
+2. Right-click on the **Databases** node, and select **Restore Database**.
+3. Select **Device** and click on the button **...**
+4. In the dialog **Select backup devices**, click **Add**, navigate to the database backup in the filesystem of the server, and select the backup. Click **OK**.
+5. If needed, change the target location for the data and log files, in the **Files** pane. Note that it is best practice to place data and log files on different drives.
+6. Click **OK**. This will initiate the database restore. After it completes, you will have the database WideWorldImportersDW installed on your SQL Server instance.
+
+### Azure SQL Database
+
+To import a bacpac into a new SQL Database, you can use Management Studio.
+1. (optional) If you do not yet have a SQL Server in Azure, navigate to the [Azure portal](https://portal.azure.com/) and create a new SQL Database. In the process of creating a database, you will create a server. Make note of the server.
+   - See [this tutorial](https://azure.microsoft.com/documentation/articles/sql-database-get-started/) to create a database in minutes.
+2. Open SQL Server Management Studio and connect to your server in Azure.
+3. Right-click on the **Databases** node, and select **Import Data-Tier Application**.
+4. In the **Import Settings** select **Import from local disk** and select the bacpac of the sample database from your file system.
+5. Under **Database Settings** change the database name to *WideWorldImportersDW* and select the target edition and service objective to use.
+6. Click **Next** and **Finish** to kick off deployment. It will take a few minutes to complete. When specifying a service objective lower than S2 it may take longer.
+
+## Configuration
+
+The sample database can make use of PolyBase to query files in Hadoop or Azure blob storage. However, that feature is not installed by default with SQL Server - you need to select it during SQL Server setup. Therefore, a post-installation step is required.
+
+1. In SQL Server Management Studio, connect to the WideWorldImportersDW database and open a new query window.
+2. Run the following T-SQL command to enable the use of PolyBase in the database:
+
+    EXECUTE [Application].[Configuration_ApplyPolyBase]
diff --git a/samples/databases/wide-world-importers/documentation/wwi-olap-sample-queries.md b/samples/databases/wide-world-importers/documentation/wwi-olap-sample-queries.md
new file mode 100644
index 0000000000..171bbe7287
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-olap-sample-queries.md
@@ -0,0 +1,5 @@
+# WideWorldImportersDW OLAP Sample Queries
+
+Refer to the sample-scripts.zip file that is included with the release of the sample, or refer to the source code:
+
+[wide-world-importers/sample-scripts](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/sample-scripts)
diff --git a/samples/databases/wide-world-importers/documentation/wwi-olap-sql-features.md b/samples/databases/wide-world-importers/documentation/wwi-olap-sql-features.md
new file mode 100644
index 0000000000..77c687d547
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-olap-sql-features.md
@@ -0,0 +1,95 @@
+# WideWorldImportersDW Use of SQL Server Features and Capabilities
+
+WideWorldImportersDW is designed to showcase many of the key features of SQL Server that are suitable for data warehousing and analytics.
+The following is a list of SQL Server features and capabilities, and a description of how they are used in WideWorldImportersDW.
+
+## PolyBase
+
+[Applies to SQL Server (2016 and later)]
+
+PolyBase is used to combine sales information from WideWorldImportersDW with a public data set about demographics to understand which cities might be of interest for further expansion of sales.
+
+To enable the use of PolyBase in the sample database, make sure it is installed, and run the following statement in the database:
+
+    EXEC [Application].[Configuration_ApplyPolyBase]
+
+This will create an external table `dbo.CityPopulationStatistics` that references a public data set that contains population data for cities in the United States, hosted in Azure blob storage. The following query returns the data from that external data set:
+
+    SELECT CityID, StateProvinceCode, CityName, YearNumber, LatestRecordedPopulation FROM dbo.CityPopulationStatistics;
+
+To understand which cities might be of interest for further expansion, the following query looks at the growth rate of cities, and returns the top 100 largest cities with significant growth, and where Wide World Importers does not have a sales presence. The query involves a join between the remote table `dbo.CityPopulationStatistics` and the local table `Dimension.City`, and a filter involving the local table `Fact.Sale`.
+
+    WITH PotentialCities
+    AS
+    (
+        SELECT cps.CityName,
+               cps.StateProvinceCode,
+               MAX(cps.LatestRecordedPopulation) AS PopulationIn2016,
+               (MAX(cps.LatestRecordedPopulation) - MIN(cps.LatestRecordedPopulation)) * 100.0
+                   / MIN(cps.LatestRecordedPopulation) AS GrowthRate
+        FROM dbo.CityPopulationStatistics AS cps
+        WHERE cps.LatestRecordedPopulation IS NOT NULL
+        AND cps.LatestRecordedPopulation <> 0
+        GROUP BY cps.CityName, cps.StateProvinceCode
+    ),
+    InterestingCities
+    AS
+    (
+        SELECT DISTINCT pc.CityName,
+               pc.StateProvinceCode,
+               pc.PopulationIn2016,
+               FLOOR(pc.GrowthRate) AS GrowthRate
+        FROM PotentialCities AS pc
+        INNER JOIN Dimension.City AS c
+        ON pc.CityName = c.City
+        WHERE GrowthRate > 2.0
+        AND NOT EXISTS (SELECT 1 FROM Fact.Sale AS s WHERE s.[City Key] = c.[City Key])
+    )
+    SELECT TOP(100) CityName, StateProvinceCode, PopulationIn2016, GrowthRate
+    FROM InterestingCities
+    ORDER BY PopulationIn2016 DESC;
+
+## Clustered Columnstore Indexes
+
+(Full version of the sample)
+
+Clustered Columnstore Indexes (CCI) are used with all the fact tables, to reduce storage footprint and improve query performance. With the use of CCI, the base storage for the fact tables uses column compression.
+
+Nonclustered indexes are used on top of the clustered columnstore index, to facilitate primary key and foreign key constraints. These constraints were added out of an abundance of caution - the ETL process sources the data from the WideWorldImporters database, which has constraints to enforce integrity. Removing primary and foreign key constraints, and their supporting indexes, would reduce the storage footprint of the fact tables.
+
+**Data size**
+
+The sample database has limited data size, to make it easy to download and install the sample. However, to see the real performance benefits of columnstore indexes, you would want to use a larger data set.
+
+You can run the following statement to increase the size of the `Fact.Sale` table by inserting another 12 million rows of sample data. These rows are all inserted for the year 2012, such that there is no interference with the ETL process.
+
+    EXECUTE [Application].[Configuration_PopulateLargeSaleTable]
+
+This statement will take around 5 minutes to run. To insert more than 12 million rows, pass the desired number of rows to insert as a parameter to this stored procedure.
+
+To compare query performance with and without columnstore, you can drop and/or recreate the clustered columnstore index.
+
+To drop the index:
+
+    DROP INDEX [CCX_Fact_Order] ON [Fact].[Order]
+
+To recreate:
+
+    CREATE CLUSTERED COLUMNSTORE INDEX [CCX_Fact_Order] ON [Fact].[Order]
+
+## Partitioning
+
+(Full version of the sample)
+
+Data size in a Data Warehouse can grow very large. Therefore, it is best practice to use partitioning to manage the storage of the large tables in the database.
+
+All of the larger fact tables are partitioned by year. The only exception is `Fact.[Stock Holding]`, which is not date-based and has limited data size compared with the other fact tables.
+
+The partition function used for all partitioned tables is `PF_Date`, and the partition scheme being used is `PS_Date`.
+
+## In-Memory OLTP
+
+(Full version of the sample)
+
+WideWorldImportersDW uses SCHEMA_ONLY memory-optimized tables for the staging tables. All `Integration.`\*`_Staging` tables are SCHEMA_ONLY memory-optimized tables.
+
+The advantage of SCHEMA_ONLY tables is that they are not logged, and do not require any disk access. This improves the performance of the ETL process. Since these tables are not logged, their contents are lost if there is a failure. However, the data source is still available, so the ETL process can simply be restarted if a failure occurs.
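+
+For illustration, the general pattern of such a staging table looks like the following sketch (the table and column names here are hypothetical, and a memory-optimized filegroup must already be configured):
+
+    CREATE TABLE Integration.Example_Staging
+    (
+        ExampleKey int IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
+        ExampleValue nvarchar(100) NULL
+    )
+    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);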
diff --git a/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-catalog.md b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-catalog.md
new file mode 100644
index 0000000000..75496ec47a
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-catalog.md
@@ -0,0 +1,131 @@
+# WideWorldImporters Database Catalog
+
+The WideWorldImporters database contains all the transaction information and daily data for sales and purchases, as well as sensor data for vehicles and cold rooms.
+
+## Schemas
+
+WideWorldImporters uses schemas for different purposes, such as storing data, defining how users can access the data, and providing objects for data warehouse development and integration.
+
+### Data schemas
+
+These schemas contain the data. A number of tables are needed by all other schemas and are located in the Application schema.
+
+|Schema|Description|
+|-----------------------------|---------------------|
+|Application|Application-wide users, contacts, and parameters. This also contains reference tables with data that is used by multiple schemas.|
+|Purchasing|Stock item purchases from suppliers and details about suppliers.|
+|Sales|Stock item sales to retail customers, and details about customers and sales people.|
+|Warehouse|Stock item inventory and transactions.|
+
+### Secure-access schemas
+
+These schemas are used for external applications that are not allowed to access the data tables directly. They contain views and stored procedures used by external applications.
+
+|Schema|Description|
+|-----------------------------|---------------------|
+|Website|All access to the database from the company website is through this schema.|
+|Reports|All access to the database from Reporting Services reports is through this schema.|
+|PowerBI|All access to the database from the Power BI dashboards via the Enterprise Gateway is through this schema.|
+
+Note that the Reports and PowerBI schemas are not used in the initial release of the sample database. However, all Reporting Services and Power BI samples built on top of this database are encouraged to use these schemas.
+
+### Development schemas
+
+Special-purpose schemas
+
+|Schema|Description|
+|-----------------------------|---------------------|
+|Integration|Objects and procedures required for data warehouse integration (i.e. migrating the data to the WideWorldImportersDW database).|
+|Sequences|Holds sequences used by all tables in the application.|
+
+## Tables
+
+All tables in the database are in the data schemas.
+
+### Application Schema
+
+Details of parameters and people (users and contacts), along with common reference tables (common to multiple other schemas).
+
+|Table|Description|
+|-----------------------------|---------------------|
+|SystemParameters|Contains system-wide configurable parameters.|
+|People|Contains names and contact information for everyone who uses the application, and for the people that Wide World Importers deals with at customer organizations. This includes staff, customers, suppliers, and any other contacts. For people who have been granted permission to use the system or website, the information includes login details.|
+|Cities|There are many addresses stored in the system, for people, customer organization delivery addresses, pickup addresses at suppliers, etc. Whenever an address is stored, there is a reference to a city in this table. There is also a spatial location for each city.|
+|StateProvinces|Cities are part of states or provinces. This table has details of those, including spatial data describing the boundaries of each state or province.|
+|Countries|States or Provinces are part of countries. This table has details of those, including spatial data describing the boundaries of each country.|
+|DeliveryMethods|Choices for delivering stock items (e.g., truck/van, post, pickup, courier, etc.)|
+|PaymentMethods|Choices for making payments (e.g., cash, check, EFT, etc.)|
+|TransactionTypes|Types of customer, supplier, or stock transactions (e.g., invoice, credit note, etc.)|
+
+### Purchasing Schema
+
+Details of suppliers and of stock item purchases.
+
+|Table|Description|
+|-----------------------------|---------------------|
+|Suppliers|Main entity table for suppliers (organizations)|
+|SupplierCategories|Categories for suppliers (e.g., novelties, toys, clothing, packaging, etc.)|
+|SupplierTransactions|All financial transactions that are supplier-related (invoices, payments)|
+|PurchaseOrders|Details of supplier purchase orders|
+|PurchaseOrderLines|Detail lines from supplier purchase orders|
+
+&nbsp;
+### Sales Schema
+
+Details of customers, salespeople, and of stock item sales.
+
+|Table|Description|
+|-----------------------------|---------------------|
+|Customers|Main entity table for customers (organizations or individuals)|
+|CustomerCategories|Categories for customers (e.g., novelty stores, supermarkets, etc.)|
+|BuyingGroups|Customer organizations can be part of groups that exert greater buying power|
+|CustomerTransactions|All financial transactions that are customer-related (invoices, payments)|
+|SpecialDeals|Special pricing. This can include fixed prices, discount in dollars, or discount percent.|
+|Orders|Details of customer orders|
+|OrderLines|Detail lines from customer orders|
+|Invoices|Details of customer invoices|
+|InvoiceLines|Detail lines from customer invoices|
+
+### Warehouse Schema
+
+Details of stock items, their holdings and transactions.
+
+|Table|Description|
+|-----------------------------|---------------------|
+|StockItems|Main entity table for stock items|
+|StockItemHoldings|Non-temporal columns for stock items. These are frequently updated columns.|
+|StockGroups|Groups for categorizing stock items (e.g., novelties, toys, edible novelties, etc.)|
+|StockItemStockGroups|Which stock items are in which stock groups (many to many)|
+|Colors|Stock items can (optionally) have colors|
+|PackageTypes|Ways that stock items can be packaged (e.g., box, carton, pallet, kg, etc.)|
+|StockItemTransactions|Transactions covering all movements of all stock items (receipt, sale, write-off)|
+|VehicleTemperatures|Regularly recorded temperatures of vehicle chillers|
+|ColdRoomTemperatures|Regularly recorded temperatures of cold room chillers|
+
+## Design considerations
+
+Database design is subjective and there is no right or wrong way to design a database. The schemas and tables in this database show ideas for how you can design your own database.
+
+### Schema design
+
+WideWorldImporters uses a small number of schemas so that it is easy to understand the database system and demonstrate database principles.
+
+Wherever possible, the database collocates tables that are commonly queried together into the same schema to minimize join complexity.
+
+The database schema has been code-generated based on a series of metadata tables in another database, WWI_Preparation. This gives WideWorldImporters a very high degree of design consistency, naming consistency, and completeness. For details on how the schema has been generated, see the source code: [wide-world-importers/wwi-database-scripts](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/wwi-database-scripts)
+
+### Table design
+
+- All tables have single-column primary keys for join simplicity.
+- All schemas, tables, columns, indexes, and check constraints have a Description extended property that can be used to identify the purpose of the object or column. Memory-optimized tables are an exception to this since they don’t currently support extended properties.
+- All foreign keys are automatically indexed unless there is another non-clustered index that has the same left-hand component.
+- Auto-numbering in tables is based on sequences. These sequences are easier to work with across linked servers and similar environments than IDENTITY columns. Memory-optimized tables use IDENTITY columns since they don’t support sequences in SQL Server 2016.
+- A single sequence (TransactionID) is used for these tables: CustomerTransactions, SupplierTransactions, and StockItemTransactions. This demonstrates how a set of tables can share a single sequence.
+- Some columns have appropriate default values.
+
+### Security schemas
+
+For security, WideWorldImporters does not allow external applications to access data schemas directly. To isolate access, WideWorldImporters uses security-access schemas that do not hold data, but contain views and stored procedures. External applications use the security schemas to retrieve the data that they are allowed to view. This way, users can only run the views and stored procedures in the security-access schemas.
+
+For example, this sample includes Power BI dashboards. An external application accesses these Power BI dashboards from the Power BI gateway as a user that has read-only permission on the PowerBI schema. For read-only permission, the user only needs SELECT and EXECUTE permission on the PowerBI schema. A database administrator at WWI assigns these permissions as needed. 
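+
+As a minimal sketch of how such read-only access could be granted (the user name *PowerBIGatewayUser* is hypothetical and not part of the sample scripts), a database administrator might run:
+
+    -- Hypothetical database user for the Power BI gateway (name is illustrative)
+    CREATE USER PowerBIGatewayUser WITHOUT LOGIN;
+
+    -- SELECT covers the views in the PowerBI schema; EXECUTE covers its stored procedures
+    GRANT SELECT, EXECUTE ON SCHEMA::PowerBI TO PowerBIGatewayUser;
+
+Because no permissions are granted on the data schemas themselves, this user can only read what the views and stored procedures in the PowerBI schema expose.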
diff --git a/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-installation.md b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-installation.md new file mode 100644 index 0000000000..35031721c0 --- /dev/null +++ b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-installation.md @@ -0,0 +1,51 @@ +# WideWorldImporters Installation and Configuration + +## Prerequisites + +- [SQL Server 2016](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2016) (or higher) or [Azure SQL Database](https://azure.microsoft.com/services/sql-database/). To use the Full version of the sample, use SQL Server Evaluation/Developer/Enterprise Edition. +- [SQL Server Management Studio](https://msdn.microsoft.com/library/mt238290.aspx). For the best results use the April 2016 preview or later. + +## Download + +The latest release of the sample: + +[wide-world-importers-v0.1](https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v0.1) + +Download the sample WideWorldImporters database backup/bacpac that corresponds to your edition of SQL Server or Azure SQL Database. + +Source code to recreate the sample database is available from the following location. Note that recreating the sample will result in slight differences in the data, since there is a random factor in the data generation: + +[wide-world-importers](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/wwi-database-scripts) + +## Install + + +### SQL Server + +To restore a backup to a SQL Server instance, you can use Management Studio. +1. Open SQL Server Management Studio and connect to the target SQL Server instance. +2. Right-click on the **Databases** node, and select **Restore Database**. +3. Select **Device** and click on the button **...** +4. In the dialog **Select backup devices**, click **Add**, navigate to the database backup in the filesystem of the server, and select the backup. 
Click **OK**.
+5. If needed, change the target location for the data and log files in the **Files** pane. Note that it is best practice to place data and log files on different drives.
+6. Click **OK**. This will initiate the database restore. After it completes, you will have the database WideWorldImporters installed on your SQL Server instance.
+
+### Azure SQL Database
+
+To import a bacpac into a new SQL Database, you can use Management Studio.
+1. (optional) If you do not yet have a SQL Server in Azure, navigate to the [Azure portal](https://portal.azure.com/) and create a new SQL Database. In the process of creating the database, you will create a server. Make note of the server.
+  - See [this tutorial](https://azure.microsoft.com/documentation/articles/sql-database-get-started/) to create a database in minutes
+2. Open SQL Server Management Studio and connect to your server in Azure.
+3. Right-click on the **Databases** node, and select **Import Data-Tier Application**.
+4. In the **Import Settings**, select **Import from local disk** and select the bacpac of the sample database from your file system.
+5. Under **Database Settings**, change the database name to *WideWorldImporters* and select the target edition and service objective to use.
+6. Click **Next** and **Finish** to kick off deployment. It will take a few minutes to complete. When specifying a service objective lower than S2, it may take longer.
+
+## Configuration
+
+The sample database can make use of Full-Text Indexing. However, that feature is not installed by default with SQL Server - you need to select it during SQL Server setup. Therefore, a post-installation step is required.
+
+1. In SQL Server Management Studio, connect to the WideWorldImporters database and open a new query window.
+2. 
Run the following T-SQL command to enable the use of Full-Text Indexing in the database: + + EXECUTE Application.Configuration_ApplyFullTextIndexing diff --git a/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-sample-queries.md b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-sample-queries.md new file mode 100644 index 0000000000..9a08faf06d --- /dev/null +++ b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-sample-queries.md @@ -0,0 +1,5 @@ +# WideWorldImporters Sample Queries + +Refer to the sample-scripts.zip file that is included with the release of the sample, or refer to the source code: + +[wide-world-importers/sample-scripts](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/sample-scripts) diff --git a/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-sql-features.md b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-sql-features.md new file mode 100644 index 0000000000..de3a551378 --- /dev/null +++ b/samples/databases/wide-world-importers/documentation/wwi-oltp-htap-sql-features.md @@ -0,0 +1,26 @@ +# WideWorldImporters Use of SQL Server Features and Capabilities + +WideWorldImporters is designed to showcase many of the key features of SQL Server, including the latest features introduced in SQL Server 2016. The following is a list of SQL Server features and capabilities, and a description of how they are used in WideWorldImporters. + +|SQL Server feature or capability|Use in WideWorldImporters| +|-----------------------------|---------------------| +|Temporal tables|There are many temporal tables, including all look-up style reference tables and main entities such as StockItems, Customers, and Suppliers. 
Using temporal tables allows you to conveniently keep track of the history of these entities.|
+|AJAX calls for JSON|The application frequently uses AJAX calls to query these tables: Persons, Customers, Suppliers, and StockItems. The calls return JSON payloads (i.e., the data returned is formatted as JSON). See, for example, the stored procedure `Website.SearchForCustomers`.|
+|JSON property/value bags|A number of tables have columns that hold JSON data to extend the relational data in the table. For example, `Application.SystemParameters` has a column for application settings and `Application.People` has a column to record user preferences. These tables use an `nvarchar(max)` column to record the JSON data, along with a CHECK constraint using the built-in function `ISJSON` to ensure the column values are valid JSON.|
+|Row-level security (RLS)|Row-Level Security (RLS) is used to limit access to the Customers table, based on role membership. Each sales territory has a role and a user. A script is provided to demonstrate this in action.|
+|Real-time Operational Analytics|(Full version of the database) The core transactional tables `Sales.InvoiceLines` and `Sales.OrderLines` both have a non-clustered columnstore index to support efficient execution of analytical queries in the transactional database with minimal impact on the operational workload. Running transactions and analytics in the same database is also referred to as [Hybrid Transactional/Analytical Processing (HTAP)](https://wikipedia.org/wiki/Hybrid_Transactional/Analytical_Processing_(HTAP)).|
+|In-Memory OLTP|(Full version of the database) The table types are all memory-optimized, so that table-valued parameters (TVPs) all benefit from memory-optimization.

The two monitoring tables, `Warehouse.VehicleTemperatures` and `Warehouse.ColdRoomTemperatures`, are memory-optimized. This allows the ColdRoomTemperatures table to be populated at higher speed than a traditional disk-based table. The VehicleTemperatures table holds the JSON payload and lends itself to extension towards IoT scenarios. The VehicleTemperatures table further lends itself to scenarios involving EventHubs, Stream Analytics, and Power BI.

The stored procedure `Website.RecordColdRoomTemperatures` is natively compiled to further improve the performance of recording cold room temperatures.|
+|Clustered columnstore index|(Full version of the database) The table `Warehouse.StockItemTransactions` uses a clustered columnstore index. The number of rows in this table is expected to grow large, and the clustered columnstore index significantly reduces the on-disk size of the table and improves query performance. The modifications to this table are insert-only - there is no update/delete on this table in the online workload - and the clustered columnstore index performs well for insert workloads.|
+|Dynamic Data Masking|In the database schema, Data Masking has been applied to the bank details held for Suppliers, in the table `Purchasing.Suppliers`. Non-admin staff will not have access to this information.|
+|Always Encrypted|The AccountNumber field uses Always Encrypted. Unfortunately, at present, it cannot also use data masking.

A demo for Always Encrypted is included in the downloadable samples.zip. The demo creates an encryption key, a table using encryption for sensitive data, and a small sample application that inserts data into the table.| +|Stretch database|The `Warehouse.ColdRoomTemperatures` table has been implemented as a temporal table, and is memory-optimized in the Full version of the sample database. The archive table is disk-based and can be stretched to Azure.| +|Full-text indexes|Full-text indexes improve searches for People, Customers, and StockItems. The indexes are applied to queries only if you have full-text indexing installed on your SQL Server instance. A non-persistent computed column is used to create the data that is full-text indexed in the StockItems table.

`CONCAT` is used for concatenating the fields to create SearchData that is full-text indexed.
To enable the use of full-text indexes in the sample, execute the following statement in the database:

`EXECUTE [Application].[Configuration_ConfigureFullTextIndexing]`

The procedure creates a default full-text catalog if one doesn’t already exist, then replaces the search views with full-text versions of those views.

Note that using full-text indexes in SQL Server requires selecting the Full-Text option during installation. Azure SQL Database does not require any specific configuration to enable full-text indexes.|
+|Indexed persisted computed columns|Indexed persisted computed columns are used in SupplierTransactions and CustomerTransactions.|
+|Check constraints|A relatively complex check constraint is in `Sales.SpecialDeals`. This ensures that one and only one of DiscountAmount, DiscountPercentage, and UnitPrice is configured.|
+|Unique constraints|A many-to-many construction (with unique constraints) is set up for `Warehouse.StockItemStockGroups`.|
+|Table partitioning|(Full version of the database) The tables `Sales.CustomerTransactions` and `Purchasing.SupplierTransactions` are both partitioned by year using the partition function `PF_TransactionDate` and the partition scheme `PS_TransactionDate`. Partitioning is used to improve the manageability of large tables.|
+|List processing|An example table type `Website.OrderIDList` is provided. It is used by an example procedure `Website.InvoiceCustomerOrders`. The procedure uses Common Table Expressions (CTEs), TRY/CATCH, JSON_MODIFY, XACT_ABORT, NOCOUNT, THROW, and XACT_STATE to demonstrate the ability to process a list of orders rather than just a single order, minimizing round trips from the application to the database engine.|
+|GZip compression|The `Warehouse.VehicleTemperatures` table holds full sensor data, but when this data is more than a few months old, it is compressed to conserve space using the COMPRESS function, which uses GZip compression.

The view `Website.VehicleTemperatures` uses the DECOMPRESS function when retrieving data that was previously compressed.|
+|Query Store|Query Store is enabled on the database. After running a few queries, open the database in Management Studio, open the node Query Store, which is under the database, and open the report Top Resource Consuming Queries to see the query executions and the plans for the queries you just ran.|
+|STRING_SPLIT|The column `DeliveryInstructions` in the table `Sales.Invoices` has a comma-delimited value that can be used to demonstrate STRING_SPLIT.|
+|Audit|SQL Server Audit can be enabled for this sample database by running the following statement in the database:

`EXECUTE [Application].[Configuration_ApplyAuditing]`

In Azure SQL Database, auditing is enabled through the [Azure portal](https://portal.azure.com/).

Security operations involving logins, roles and permissions are logged on all systems where audit is enabled (including standard edition systems). Audit is directed to the application log because this is available on all systems and does not require additional permissions. A warning is given that for higher security, it should be redirected to the security log or to a file in a secure folder. A link is provided to describe the required additional configuration.

For evaluation/developer/enterprise edition systems, access to all financial transactional data is audited.| 
diff --git a/samples/databases/wide-world-importers/documentation/wwi-overview.md b/samples/databases/wide-world-importers/documentation/wwi-overview.md
new file mode 100644
index 0000000000..58b75fb515
--- /dev/null
+++ b/samples/databases/wide-world-importers/documentation/wwi-overview.md
@@ -0,0 +1,53 @@
+# Wide World Importers Overview
+
+This is an overview of the fictitious company Wide World Importers and the workflows that are addressed in the WideWorldImporters sample databases for SQL Server and Azure SQL Database.
+
+Wide World Importers (WWI) is a wholesale novelty goods importer and distributor operating from the San Francisco bay area.
+
+As a wholesaler, WWI’s customers are mostly companies that resell to individuals. WWI sells to retail customers across the United States including specialty stores, supermarkets, computing stores, tourist attraction shops, and some individuals. WWI also sells to other wholesalers via a network of agents who promote the products on WWI’s behalf. While all of WWI’s customers are currently based in the United States, the company intends to expand into other countries.
+
+WWI buys goods from suppliers including novelty and toy manufacturers, and other novelty wholesalers. They stock the goods in their WWI warehouse and reorder from suppliers as needed to fulfill customer orders. They also purchase large volumes of packaging materials, and sell these in smaller quantities as a convenience for the customers.
+
+Recently WWI started to sell a variety of edible novelties such as chilly chocolates. The company previously did not have to handle chilled items. Now, to meet food handling requirements, they must monitor the temperature in their chiller room and any of their trucks that have chiller sections. 
+
+## Workflow for warehouse stock items
+
+The typical flow for how items are stocked and distributed is as follows:
+- WWI creates purchase orders and submits the orders to the suppliers.
+- Suppliers send the items, WWI receives them and stocks them in their warehouse.
+- Customers order items from WWI.
+- WWI fills the customer order with stock items in the warehouse, and when they do not have sufficient stock, they order the additional stock from the suppliers.
+- Some customers do not want to wait for items that are not in stock. If they order, say, five different stock items, and four are available, they want to receive the four items and backorder the remaining item. The item would then be sent later in a separate shipment.
+- WWI invoices customers for the stock items, typically by converting the order to an invoice.
+- Customers might order items that are not in stock. These items are backordered.
+- WWI delivers stock items to customers either via their own delivery vans, or via other couriers or freight methods.
+- Customers pay invoices to WWI.
+- Periodically, WWI pays suppliers for items that were on purchase orders. This is often sometime after they have received the goods.
+
+## Data Warehouse and analysis workflow
+
+While the team at WWI uses SQL Server Reporting Services to generate operational reports from the WideWorldImporters database, they also need to perform analytics on their data and to generate strategic reports. The team has created a dimensional data model in a database WideWorldImportersDW. This database is populated by an Integration Services package.
+
+SQL Server Analysis Services is used to create analytic data models from the data in the dimensional data model. SQL Server Reporting Services is used to generate strategic reports directly from the dimensional data model, and also from the analytic model. Power BI is used to create dashboards from the same data. The dashboards are used on websites, and on phones and tablets. 
*Note: these data models and reports are not yet available*
+
+## Additional workflows
+
+These are additional workflows.
+- WWI issues credit notes when a customer does not receive the goods for some reason, or when the goods are faulty. These are treated as negative invoices.
+- WWI periodically counts the on-hand quantities of stock items to ensure that the stock quantities shown as available on their system are accurate. (The process of doing this is called a stocktake.)
+- Cold room temperatures. Perishable goods are stored in refrigerated rooms. Sensor data from these rooms is ingested into the database for monitoring and analytics purposes.
+- Vehicle location tracking. Vehicles that transport goods for WWI include sensors that track the location of the vehicle. This location is also ingested into the database for monitoring and further analytics.
+
+## Version of the application
+
+Wide World Importers migrated from their previous system to this new SQL Server WWI database system starting January 1, 2013. They migrated to the latest version of SQL Server and started leveraging Azure SQL Database in 2016 to benefit from all the new capabilities.
+
+## Fiscal year
+
+The company operates with a fiscal year that starts on November 1st.
+
+## Terms of use
+
+The license for the sample database and the sample code is described here: [license.txt](https://github.com/Microsoft/sql-server-samples/blob/master/license.txt)
+
+The sample database includes public data that has been loaded from data.gov and Natural Earth Data. 
The terms of use are here: [http://www.naturalearthdata.com/about/terms-of-use/](http://www.naturalearthdata.com/about/terms-of-use/)
diff --git a/samples/databases/wide-world-importers/sample-scripts/polybase/DemonstratePolybase.sql b/samples/databases/wide-world-importers/sample-scripts/polybase/DemonstratePolybase.sql
index 8f4a30ccf2..2675074f8c 100644
--- a/samples/databases/wide-world-importers/sample-scripts/polybase/DemonstratePolybase.sql
+++ b/samples/databases/wide-world-importers/sample-scripts/polybase/DemonstratePolybase.sql
@@ -4,8 +4,8 @@ USE WideWorldImportersDW;
 GO
 
--- WideWorldImporters have customers in a variety of cities but feel they are likely missing 
--- other important cities. They have decided to try to find other cities have a growth rate of more 
+-- WideWorldImporters have customers in a variety of cities but feel they are likely missing
+-- other important cities. They have decided to try to find other cities that have a growth rate of more
 -- than 20% over the last 3 years, and where they do not have existing customers. 
-- They have obtained census data (a CSV file) and have loaded it into an Azure storage account. -- They want to combine that data with other data in their main OLTP database to work out where @@ -22,28 +22,28 @@ GO -- Expand the dbo.CityPopulationStatistics table, expand the list of columns and note the -- values that are contained. Let's look at the data: -SELECT * FROM dbo.CityPopulationStatistics; +SELECT CityID, StateProvinceCode, CityName, YearNumber, LatestRecordedPopulation FROM dbo.CityPopulationStatistics; GO -- How did that work? First the procedure created an external data source like this: /* -CREATE EXTERNAL DATA SOURCE AzureStorage -WITH +CREATE EXTERNAL DATA SOURCE AzureStorage +WITH ( TYPE=HADOOP, LOCATION = 'wasbs://data@sqldwdatasets.blob.core.windows.net' ); */ --- This shows how to connect to AzureStorage. Next the procedure created an +-- This shows how to connect to AzureStorage. Next the procedure created an -- external file format to describe the layout of the CSV file: /* -CREATE EXTERNAL FILE FORMAT CommaDelimitedTextFileFormat -WITH +CREATE EXTERNAL FILE FORMAT CommaDelimitedTextFileFormat +WITH ( - FORMAT_TYPE = DELIMITEDTEXT, - FORMAT_OPTIONS + FORMAT_TYPE = DELIMITEDTEXT, + FORMAT_OPTIONS ( FIELD_TERMINATOR = ',' ) @@ -61,9 +61,9 @@ CREATE EXTERNAL TABLE dbo.CityPopulationStatistics YearNumber int NOT NULL, LatestRecordedPopulation bigint NULL ) -WITH -( - LOCATION = '/', +WITH +( + LOCATION = '/', DATA_SOURCE = AzureStorage, FILE_FORMAT = CommaDelimitedTextFileFormat, REJECT_TYPE = VALUE, @@ -71,7 +71,7 @@ WITH ); */ --- From that point onwards, the external table can be used like a local table. Let's run that +-- From that point onwards, the external table can be used like a local table. Let's run that -- query that they wanted to use to find out which cities they should be finding new customers -- in. 
We'll start building the query by grouping the cities from the external table -- and finding those with more than a 20% growth rate for the period: @@ -79,17 +79,17 @@ WITH WITH PotentialCities AS ( - SELECT cps.CityName, + SELECT cps.CityName, cps.StateProvinceCode, MAX(cps.LatestRecordedPopulation) AS PopulationIn2016, - (MAX(cps.LatestRecordedPopulation) - MIN(cps.LatestRecordedPopulation)) * 100.0 + (MAX(cps.LatestRecordedPopulation) - MIN(cps.LatestRecordedPopulation)) * 100.0 / MIN(cps.LatestRecordedPopulation) AS GrowthRate FROM dbo.CityPopulationStatistics AS cps WHERE cps.LatestRecordedPopulation IS NOT NULL - AND cps.LatestRecordedPopulation <> 0 + AND cps.LatestRecordedPopulation <> 0 GROUP BY cps.CityName, cps.StateProvinceCode ) -SELECT * +SELECT CityName, StateProvinceCode, PopulationIn2016, GrowthRate FROM PotentialCities WHERE GrowthRate > 2.0; GO @@ -100,31 +100,31 @@ GO WITH PotentialCities AS ( - SELECT cps.CityName, + SELECT cps.CityName, cps.StateProvinceCode, MAX(cps.LatestRecordedPopulation) AS PopulationIn2016, - (MAX(cps.LatestRecordedPopulation) - MIN(cps.LatestRecordedPopulation)) * 100.0 + (MAX(cps.LatestRecordedPopulation) - MIN(cps.LatestRecordedPopulation)) * 100.0 / MIN(cps.LatestRecordedPopulation) AS GrowthRate FROM dbo.CityPopulationStatistics AS cps WHERE cps.LatestRecordedPopulation IS NOT NULL - AND cps.LatestRecordedPopulation <> 0 + AND cps.LatestRecordedPopulation <> 0 GROUP BY cps.CityName, cps.StateProvinceCode ), InterestingCities AS ( - SELECT DISTINCT pc.CityName, - pc.StateProvinceCode, + SELECT DISTINCT pc.CityName, + pc.StateProvinceCode, pc.PopulationIn2016, FLOOR(pc.GrowthRate) AS GrowthRate FROM PotentialCities AS pc INNER JOIN Dimension.City AS c - ON pc.CityName = c.City + ON pc.CityName = c.City WHERE GrowthRate > 2.0 AND NOT EXISTS (SELECT 1 FROM Fact.Sale AS s WHERE s.[City Key] = c.[City Key]) ) -SELECT TOP(100) * -FROM InterestingCities +SELECT TOP(100) CityName, StateProvinceCode, PopulationIn2016, 
GrowthRate +FROM InterestingCities ORDER BY PopulationIn2016 DESC; GO @@ -136,4 +136,4 @@ DROP EXTERNAL FILE FORMAT CommaDelimitedTextFileFormat; GO DROP EXTERNAL DATA SOURCE AzureStorage; GO -*/ \ No newline at end of file +*/ diff --git a/samples/databases/wide-world-importers/workload-drivers/vehicle-location-insert/.vs/MultithreadedInMemoryTableInsert/v14/.suo b/samples/databases/wide-world-importers/workload-drivers/vehicle-location-insert/.vs/MultithreadedInMemoryTableInsert/v14/.suo new file mode 100644 index 0000000000..1f40f9792e Binary files /dev/null and b/samples/databases/wide-world-importers/workload-drivers/vehicle-location-insert/.vs/MultithreadedInMemoryTableInsert/v14/.suo differ diff --git a/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/DeploymentReport.txt b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/DeploymentReport.txt new file mode 100644 index 0000000000..be03fc29ec --- /dev/null +++ b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/DeploymentReport.txt @@ -0,0 +1,34 @@ +** Warnings + User level transactions are not supported for memory optimized objects. You must disable the 'Include transactional + scripts' deployment option to successfully deploy changes to memory optimized objects. + +** Highlights + Tables that will be rebuilt + None + Clustered indexes that will be dropped + [dbo].[sql_ts_th] on [dbo].[TicketReservationDetail] + Clustered indexes that will be created + None + Possible data issues + The table [dbo].[TicketReservationDetail] is being dropped and re-created since all non-computed columns within the + table have been redefined. 
+ +** User actions + Create + [mod] (Filegroup) + [dbo].[TicketReservationDetail] (Table) + [dbo].[InsertReservationDetails] (Procedure) + Drop + [dbo].[InsertReservationDetails] (Procedure) + [dbo].[sql_ts_th] (Primary Key) + [dbo].[TicketReservationDetail] (Table) + Alter + [dbo].[ReadMultipleReservations] (Procedure) + [dbo].[BatchInsertReservations] (Procedure) + +** Supporting actions + Refresh + [dbo].[Demo_Reset] (Procedure) + +The table [dbo].[TicketReservationDetail] is being dropped and re-created since all non-computed columns within the table have been redefined. + diff --git a/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/DeploymentReport_1.txt b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/DeploymentReport_1.txt new file mode 100644 index 0000000000..4b7c860ca7 --- /dev/null +++ b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/DeploymentReport_1.txt @@ -0,0 +1,25 @@ +** Warnings + User level transactions are not supported for memory optimized objects. You must disable the 'Include transactional + scripts' deployment option to successfully deploy changes to memory optimized objects. 
+
+** Highlights
+    Tables that will be rebuilt
+        None
+    Clustered indexes that will be dropped
+        None
+    Clustered indexes that will be created
+        None
+    Possible data issues
+        None
+
+** User actions
+    Drop
+        [dbo].[InsertReservationDetails] (Procedure)
+    Create
+        [dbo].[TicketReservationDetail] (Table)
+        [dbo].[InsertReservationDetails] (Procedure)
+    Alter
+        [dbo].[ReadMultipleReservations] (Procedure)
+        [dbo].[BatchInsertReservations] (Procedure)
+
+** Supporting actions
diff --git a/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/TicketReservations.publish.sql b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/TicketReservations.publish.sql
new file mode 100644
index 0000000000..d7cd1804d0
--- /dev/null
+++ b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/TicketReservations.publish.sql
@@ -0,0 +1,258 @@
+/*
+Deployment script for TicketReservations
+
+This code was generated by a tool.
+Changes to this file may cause incorrect behavior and will be lost if
+the code is regenerated.
+*/
+
+GO
+SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
+
+SET NUMERIC_ROUNDABORT OFF;
+
+
+GO
+:setvar DatabaseName "TicketReservations"
+:setvar DefaultFilePrefix "TicketReservations"
+:setvar DefaultDataPath "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\"
+:setvar DefaultLogPath "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\"
+
+GO
+:on error exit
+GO
+/*
+Detect SQLCMD mode and disable script execution if SQLCMD mode is not supported.
+To re-enable the script after enabling SQLCMD mode, execute the following:
+SET NOEXEC OFF;
+*/
+:setvar __IsSqlCmdEnabled "True"
+GO
+IF N'$(__IsSqlCmdEnabled)' NOT LIKE N'True'
+    BEGIN
+        PRINT N'SQLCMD mode must be enabled to successfully execute this script.';
+        SET NOEXEC ON;
+    END
+
+
+GO
+PRINT N'Creating [mod]...';
+
+
+GO
+ALTER DATABASE [$(DatabaseName)]
+    ADD FILEGROUP [mod] CONTAINS MEMORY_OPTIMIZED_DATA;
+
+
+GO
+ALTER DATABASE [$(DatabaseName)]
+    ADD FILE (NAME = [mod_4C0B6475], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_mod_4C0B6475.mdf') TO FILEGROUP [mod];
+
+
+GO
+IF EXISTS (SELECT 1
+           FROM   [master].[dbo].[sysdatabases]
+           WHERE  [name] = N'$(DatabaseName)')
+    BEGIN
+        ALTER DATABASE [$(DatabaseName)]
+            SET ANSI_NULLS ON,
+                ANSI_PADDING ON,
+                ANSI_WARNINGS ON,
+                ARITHABORT ON,
+                CONCAT_NULL_YIELDS_NULL ON,
+                QUOTED_IDENTIFIER ON,
+                ANSI_NULL_DEFAULT ON,
+                CURSOR_DEFAULT LOCAL
+            WITH ROLLBACK IMMEDIATE;
+    END
+
+
+GO
+IF EXISTS (SELECT 1
+           FROM   [master].[dbo].[sysdatabases]
+           WHERE  [name] = N'$(DatabaseName)')
+    BEGIN
+        ALTER DATABASE [$(DatabaseName)]
+            SET PAGE_VERIFY NONE,
+                DISABLE_BROKER
+            WITH ROLLBACK IMMEDIATE;
+    END
+
+
+GO
+IF EXISTS (SELECT 1
+           FROM   [master].[dbo].[sysdatabases]
+           WHERE  [name] = N'$(DatabaseName)')
+    BEGIN
+        ALTER DATABASE [$(DatabaseName)]
+            SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON
+            WITH ROLLBACK IMMEDIATE;
+    END
+
+
+GO
+USE [$(DatabaseName)];
+
+
+GO
+/*
+The table [dbo].[TicketReservationDetail] is being dropped and re-created since all non-computed columns within the table have been redefined.
+*/
+
+IF EXISTS (select top 1 1 from [dbo].[TicketReservationDetail])
+    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
+
+GO
+PRINT N'Dropping [dbo].[InsertReservationDetails]...';
+
+
+GO
+DROP PROCEDURE [dbo].[InsertReservationDetails];
+
+
+GO
+PRINT N'Dropping [dbo].[sql_ts_th]...';
+
+
+GO
+ALTER TABLE [dbo].[TicketReservationDetail] DROP CONSTRAINT [sql_ts_th];
+
+
+GO
+PRINT N'Dropping [dbo].[TicketReservationDetail]...';
+
+
+GO
+DROP TABLE [dbo].[TicketReservationDetail];
+
+
+GO
+PRINT N'Creating [dbo].[TicketReservationDetail]...';
+
+
+GO
+CREATE TABLE [dbo].[TicketReservationDetail] (
+    [TicketReservationID]       BIGINT          NOT NULL,
+    [TicketReservationDetailID] BIGINT          IDENTITY (1, 1) NOT NULL,
+    [Quantity]                  INT             NOT NULL,
+    [FlightID]                  INT             NOT NULL,
+    [Comment]                   NVARCHAR (1000) NULL,
+    CONSTRAINT [PK_TicketReservationDetail] PRIMARY KEY NONCLUSTERED ([TicketReservationDetailID] ASC)
+)
+WITH (MEMORY_OPTIMIZED = ON);
+
+
+GO
+PRINT N'Creating [dbo].[InsertReservationDetails]...';
+
+
+GO
+/*
+CREATE PROCEDURE InsertReservationDetails(@TicketReservationID int, @LineCount int, @Comment NVARCHAR(1000), @FlightID int)
+AS
+BEGIN
+    DECLARE @loop int = 0;
+    WHILE (@loop < @LineCount)
+    BEGIN
+        INSERT INTO dbo.TicketReservationDetail (TicketReservationID, Quantity, FlightID, Comment)
+        VALUES(@TicketReservationID, @loop % 8 + 1, @FlightID, @Comment);
+        SET @loop += 1;
+    END
+END
+*/
+
+
+-- natively compiled version of the stored procedure:
+CREATE PROCEDURE InsertReservationDetails(@TicketReservationID int, @LineCount int, @Comment NVARCHAR(1000), @FlightID int)
+WITH NATIVE_COMPILATION, SCHEMABINDING
+as
+BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE=N'English')
+
+
+    DECLARE @loop int = 0;
+    while (@loop < @LineCount)
+    BEGIN
+        INSERT INTO dbo.TicketReservationDetail (TicketReservationID, Quantity, FlightID, Comment)
+        VALUES(@TicketReservationID, @loop % 8 + 1, @FlightID, @Comment);
+        SET @loop += 1;
+    END
+END
+GO
+PRINT N'Altering [dbo].[ReadMultipleReservations]...';
+
+
+GO
+ALTER PROCEDURE ReadMultipleReservations(@ServerTransactions int, @RowsPerTransaction int, @ThreadID int)
+AS
+BEGIN
+    DECLARE @tranCount int = 0;
+    DECLARE @CurrentSeq int = 0;
+    DECLARE @Sum int = 0;
+    DECLARE @loop int = 0;
+    WHILE (@tranCount < @ServerTransactions)
+    BEGIN
+        BEGIN TRY
+            SELECT @CurrentSeq = RAND() * IDENT_CURRENT(N'dbo.TicketReservationDetail')
+            SET @loop = 0
+            BEGIN TRAN
+            WHILE (@loop < @RowsPerTransaction)
+            BEGIN
+                SELECT @Sum += FlightID from dbo.TicketReservationDetail where TicketReservationDetailID = @CurrentSeq - @loop;
+                SET @loop += 1;
+            END
+            COMMIT TRAN
+        END TRY
+        BEGIN CATCH
+            IF XACT_STATE() = -1
+                ROLLBACK TRAN
+            ;THROW
+        END CATCH
+        SET @tranCount += 1;
+    END
+END
+GO
+PRINT N'Altering [dbo].[BatchInsertReservations]...';
+
+
+GO
+-- helper stored procedure to insert reservations in batches: each transaction draws a new sequence value and calls InsertReservationDetails
+
+ALTER PROCEDURE BatchInsertReservations(@ServerTransactions int, @RowsPerTransaction int, @ThreadID int)
+AS
+BEGIN
+    DECLARE @tranCount int = 0;
+    DECLARE @TS Datetime2;
+    DECLARE @Char_TS NVARCHAR(23);
+    DECLARE @CurrentSeq int = 0;
+
+    SET @TS = SYSDATETIME();
+    SET @Char_TS = CAST(@TS AS NVARCHAR(23));
+    WHILE (@tranCount < @ServerTransactions)
+    BEGIN
+        BEGIN TRY
+            BEGIN TRAN
+            SET @CurrentSeq = NEXT VALUE FOR TicketReservationSequence ;
+            EXEC InsertReservationDetails @CurrentSeq, @RowsPerTransaction, @Char_TS, @ThreadID;
+            COMMIT TRAN
+        END TRY
+        BEGIN CATCH
+            IF XACT_STATE() = -1
+                ROLLBACK TRAN
+            ;THROW
+        END CATCH
+        SET @tranCount += 1;
+    END
+END
+GO
+PRINT N'Refreshing [dbo].[Demo_Reset]...';
+
+
+GO
+EXECUTE sp_refreshsqlmodule N'[dbo].[Demo_Reset]';
+
+
+GO
+PRINT N'Update complete.';
+
+
+GO
diff --git a/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/TicketReservations_1.publish.sql b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/TicketReservations_1.publish.sql
new file mode 100644
index 0000000000..140cf5cf6c
--- /dev/null
+++ b/samples/features/in-memory/ticket-reservations/TicketReservations/bin/Release/TicketReservations_1.publish.sql
@@ -0,0 +1,171 @@
+/*
+Deployment script for TicketReservations
+
+This code was generated by a tool.
+Changes to this file may cause incorrect behavior and will be lost if
+the code is regenerated.
+*/
+
+GO
+SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
+
+SET NUMERIC_ROUNDABORT OFF;
+
+
+GO
+:setvar DatabaseName "TicketReservations"
+:setvar DefaultFilePrefix "TicketReservations"
+:setvar DefaultDataPath "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\"
+:setvar DefaultLogPath "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\"
+
+GO
+:on error exit
+GO
+/*
+Detect SQLCMD mode and disable script execution if SQLCMD mode is not supported.
+To re-enable the script after enabling SQLCMD mode, execute the following:
+SET NOEXEC OFF;
+*/
+:setvar __IsSqlCmdEnabled "True"
+GO
+IF N'$(__IsSqlCmdEnabled)' NOT LIKE N'True'
+    BEGIN
+        PRINT N'SQLCMD mode must be enabled to successfully execute this script.';
+        SET NOEXEC ON;
+    END
+
+
+GO
+USE [$(DatabaseName)];
+
+
+GO
+PRINT N'Dropping [dbo].[InsertReservationDetails]...';
+
+
+GO
+DROP PROCEDURE [dbo].[InsertReservationDetails];
+
+
+GO
+PRINT N'Creating [dbo].[TicketReservationDetail]...';
+
+
+GO
+CREATE TABLE [dbo].[TicketReservationDetail] (
+    [TicketReservationID]       BIGINT          NOT NULL,
+    [TicketReservationDetailID] BIGINT          IDENTITY (1, 1) NOT NULL,
+    [Quantity]                  INT             NOT NULL,
+    [FlightID]                  INT             NOT NULL,
+    [Comment]                   NVARCHAR (1000) NULL,
+    CONSTRAINT [PK_TicketReservationDetail] PRIMARY KEY NONCLUSTERED ([TicketReservationDetailID] ASC)
+)
+WITH (MEMORY_OPTIMIZED = ON);
+
+
+GO
+PRINT N'Creating [dbo].[InsertReservationDetails]...';
+
+
+GO
+/*
+CREATE PROCEDURE InsertReservationDetails(@TicketReservationID int, @LineCount int, @Comment NVARCHAR(1000), @FlightID int)
+AS
+BEGIN
+    DECLARE @loop int = 0;
+    WHILE (@loop < @LineCount)
+    BEGIN
+        INSERT INTO dbo.TicketReservationDetail (TicketReservationID, Quantity, FlightID, Comment)
+        VALUES(@TicketReservationID, @loop % 8 + 1, @FlightID, @Comment);
+        SET @loop += 1;
+    END
+END
+*/
+
+
+-- natively compiled version of the stored procedure:
+CREATE PROCEDURE InsertReservationDetails(@TicketReservationID int, @LineCount int, @Comment NVARCHAR(1000), @FlightID int)
+WITH NATIVE_COMPILATION, SCHEMABINDING
+as
+BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE=N'English')
+
+
+    DECLARE @loop int = 0;
+    while (@loop < @LineCount)
+    BEGIN
+        INSERT INTO dbo.TicketReservationDetail (TicketReservationID, Quantity, FlightID, Comment)
+        VALUES(@TicketReservationID, @loop % 8 + 1, @FlightID, @Comment);
+        SET @loop += 1;
+    END
+END
+GO
+PRINT N'Altering [dbo].[ReadMultipleReservations]...';
+
+
+GO
+ALTER PROCEDURE ReadMultipleReservations(@ServerTransactions int, @RowsPerTransaction int, @ThreadID int)
+AS
+BEGIN
+    DECLARE @tranCount int = 0;
+    DECLARE @CurrentSeq int = 0;
+    DECLARE @Sum int = 0;
+    DECLARE @loop int = 0;
+    WHILE (@tranCount < @ServerTransactions)
+    BEGIN
+        BEGIN TRY
+            SELECT @CurrentSeq = RAND() * IDENT_CURRENT(N'dbo.TicketReservationDetail')
+            SET @loop = 0
+            BEGIN TRAN
+            WHILE (@loop < @RowsPerTransaction)
+            BEGIN
+                SELECT @Sum += FlightID from dbo.TicketReservationDetail where TicketReservationDetailID = @CurrentSeq - @loop;
+                SET @loop += 1;
+            END
+            COMMIT TRAN
+        END TRY
+        BEGIN CATCH
+            IF XACT_STATE() = -1
+                ROLLBACK TRAN
+            ;THROW
+        END CATCH
+        SET @tranCount += 1;
+    END
+END
+GO
+PRINT N'Altering [dbo].[BatchInsertReservations]...';
+
+
+GO
+-- helper stored procedure to insert reservations in batches: each transaction draws a new sequence value and calls InsertReservationDetails
+
+ALTER PROCEDURE BatchInsertReservations(@ServerTransactions int, @RowsPerTransaction int, @ThreadID int)
+AS
+BEGIN
+    DECLARE @tranCount int = 0;
+    DECLARE @TS Datetime2;
+    DECLARE @Char_TS NVARCHAR(23);
+    DECLARE @CurrentSeq int = 0;
+
+    SET @TS = SYSDATETIME();
+    SET @Char_TS = CAST(@TS AS NVARCHAR(23));
+    WHILE (@tranCount < @ServerTransactions)
+    BEGIN
+        BEGIN TRY
+            BEGIN TRAN
+            SET @CurrentSeq = NEXT VALUE FOR TicketReservationSequence ;
+            EXEC InsertReservationDetails @CurrentSeq, @RowsPerTransaction, @Char_TS, @ThreadID;
+            COMMIT TRAN
+        END TRY
+        BEGIN CATCH
+            IF XACT_STATE() = -1
+                ROLLBACK TRAN
+            ;THROW
+        END CATCH
+        SET @tranCount += 1;
+    END
+END
+GO
+PRINT N'Update complete.';
+
+
+GO
diff --git a/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Stored Procedures/InsertReservationDetails.sql b/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Stored Procedures/InsertReservationDetails.sql
index 70324630b6..56fda95997 100644
--- a/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Stored Procedures/InsertReservationDetails.sql
+++ b/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Stored Procedures/InsertReservationDetails.sql
@@ -1,4 +1,4 @@
-
+/*
 CREATE PROCEDURE InsertReservationDetails(@TicketReservationID int, @LineCount int, @Comment NVARCHAR(1000), @FlightID int)
 AS
 BEGIN
@@ -10,9 +10,9 @@ BEGIN
         SET @loop += 1;
     END
 END
+*/
 
-/*
 -- natively compiled version of the stored procedure:
 CREATE PROCEDURE InsertReservationDetails(@TicketReservationID int, @LineCount int, @Comment NVARCHAR(1000), @FlightID int)
 WITH NATIVE_COMPILATION, SCHEMABINDING
@@ -28,4 +28,3 @@ BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE=N'English')
         SET @loop += 1;
     END
 END
-*/
\ No newline at end of file
diff --git a/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Tables/TicketReservationDetail.sql b/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Tables/TicketReservationDetail.sql
index 9d0d059a68..b1e67e9d90 100644
--- a/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Tables/TicketReservationDetail.sql
+++ b/samples/features/in-memory/ticket-reservations/TicketReservations/dbo/Tables/TicketReservationDetail.sql
@@ -5,10 +5,11 @@ FlightID INT NOT NULL,
     Comment NVARCHAR (1000),
 -- disk-based table:
+/*
     CONSTRAINT [PK_TicketReservationDetail] PRIMARY KEY CLUSTERED (TicketReservationDetailID)
 );
+*/
 
-/*
 -- for memory-optimized, replace the last two lines with the following:
     CONSTRAINT [PK_TicketReservationDetail] PRIMARY KEY NONCLUSTERED (TicketReservationDetailID)
 ) WITH (MEMORY_OPTIMIZED=ON);
 
@@ -17,4 +18,3 @@ GO
 
 -- For SQL Server, include the following filegroup. For Azure DB, leave out the filegroup
 ALTER DATABASE [$(DatabaseName)] ADD FILEGROUP [mod] CONTAINS MEMORY_OPTIMIZED_DATA
-*/
\ No newline at end of file