
Azure Synapse Dedicated SQL Pool Reference Architecture

Note: There is an updated version of this article, which applies to the next generation of TimeXtender.

AzureSynapseArch.png

This is a reference architecture for implementing TimeXtender with Azure Synapse Dedicated SQL Pool as the Modern Data Warehouse (MDW) storage, for maximum performance when data volumes become very large (for example, at least 1 TB of data, or tables with more than 1 billion rows). Dedicated SQL Pool uses a Massively Parallel Processing (MPP) architecture that distributes processing across multiple compute nodes, allowing for very performant analytics queries. For more information, please see "When should I consider Azure Synapse Analytics?" in the article What is Azure Synapse Analytics?.

 

(Please note that a similarly named Azure Synapse service, Serverless SQL Pool, cannot store data; it is only used for high-performance, low-cost queries on the Azure Data Lake Storage (Gen2) resource associated with the Synapse workspace. For more information, please see Query ODX Parquet files with Azure Synapse Workspace.)
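
For illustration, below is a minimal Python sketch of such an ad-hoc query: it connects to the serverless SQL endpoint with pyodbc and reads ODX Parquet files in place with OPENROWSET. The server, storage account, container, and folder names are placeholders for your own environment, and the ODBC driver and authentication method are assumptions.

# Minimal sketch (assumed names): query ODX Parquet files through the
# Synapse serverless SQL endpoint without loading them anywhere.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace-name>-ondemand.sql.azuresynapse.net;"  # serverless endpoint
    "Database=master;"
    "Authentication=ActiveDirectoryInteractive;"              # or a service principal
    "Encrypt=yes;"
)

# OPENROWSET reads the Parquet files in place - the serverless pool stores nothing.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/<odx-folder>/*.parquet',
    FORMAT = 'PARQUET'
) AS result;
"""

for row in conn.cursor().execute(query):
    print(row)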

 

To prepare your TimeXtender environment in Azure, here are the steps we recommend.

  1. Create Application Server - Azure VM
  2. Create ODX Storage - Azure Data Lake Storage Gen2
  3. Prepare for Ingestion - Azure Data Factory (recommended)
  4. Create MDW Storage - Azure Synapse SQL Pool
  5. Configure Power BI Premium Endpoint (Optional)
  6. Estimate Azure Costs

 

AzureAppServer.png

1. Create Application Server - Azure VM

To serve the TimeXtender application in Azure, we recommend using an Azure Virtual Machine (VM), sized according to your solution's requirements.
Guide:

Create a TimeXtender Application Server in Azure

Considerations:
  • Recommended Sizing: DS2_v2 (for moderate workloads)
  • If the Azure VM (application server) hosts the ODX Server, it must remain running for TimeXtender to run.
  • This VM will host the services that run TimeXtender (a quick service status check is sketched after this list).
    • ODX Service
    • Scheduler Service
    • Server Services
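
As a quick operational check, the following sketch lists the status of the TimeXtender-related Windows services on the VM using Python's psutil package; the display-name filter is an assumption, so verify the actual service names in services.msc.

# Minimal sketch (assumed service naming): list TimeXtender-related
# Windows services and their current status on the application server.
import psutil  # pip install psutil; uses Windows-only service APIs

for service in psutil.win_service_iter():
    if "timextender" in service.display_name().lower():
        print(f"{service.display_name()}: {service.status()}")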

DataLakeStorage.png

2. Create ODX Storage - Azure Data Lake Storage Gen2

ADLS Gen2 is a highly performant, economical, scalable, and secure way to store your raw data.
Guide:

Use Azure Data Lake Storage with TimeXtender

Considerations:
  • When creating the ADLS Gen2 data lake service, you must enable Hierarchical Namespaces
  • TimeXtender writes files to the data lake in the Parquet format, a highly compressed, columnar storage format.
  • It is possible for the ODX Server to store data in an Azure SQL DB instead, but this adds cost and complexity without adding functionality.
  • When using Azure Data Lake for ODX and SQL DB for the Data Warehouse, it is highly recommended to use Data Factory to transport this data
  • ADLS will require a service principal, called an App Registration in Azure, for TimeXtender to access your data lake (see the sketch following this list).
    • Both the Data Lake and ADF may share the same App Registration if desired.
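
To illustrate how the App Registration grants access to the data lake, here is a minimal Python sketch that authenticates with the service principal and lists the Parquet files TimeXtender has written to the ODX container. All account names, container names, and IDs are placeholders.

# Minimal sketch (placeholder names/IDs): authenticate with the App
# Registration and list Parquet files in the ODX container in ADLS Gen2.
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-registration-client-id>",
    client_secret="<client-secret>",
)

service = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=credential,
)

file_system = service.get_file_system_client("<odx-container>")
for path in file_system.get_paths(path="<data-source-folder>"):
    if path.name.endswith(".parquet"):
        print(path.name, path.content_length)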

DataFactory.png

3. Prepare for Ingestion - Azure Data Factory (recommended)

For large data movement tasks, ADF provides excellent performance and ease of use for both ingestion and transport.
Guide:

Use ADF for Data Movement with TimeXtender

Considerations:
  • You will not use ADF for transport into Synapse; TimeXtender automatically uses PolyBase instead, which is the best practice for loading a Dedicated SQL Pool.
  • When creating the ADF resource, use version 2 (V2), which is the current default
  • A single ADF service can be used for both transport and ingestion
    • Ingestion from data source to ODX Storage
  • The option to use ADF is not available for all data source types, but many options are available.
  • ADF Data sources do not support ODX Query Tables at this time. 
  • ADF's fault-tolerant, high-performance data movement can be relatively costly, so monitor usage as data volumes grow.
  • ADF will require a service principal, called an App Registration in Azure, for TimeXtender to access your ADF service (see the sketch following this list).
    • Both the Data Lake and ADF may share the same App Registration if desired.
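
TimeXtender creates and runs the ADF pipelines itself, but the sketch below illustrates what service-principal access to the factory looks like: it starts a pipeline run and checks its status with the Azure management SDK. Subscription, resource group, factory, and pipeline names are placeholders.

# Minimal sketch (placeholder names/IDs): start an ADF pipeline run with the
# App Registration and poll its status. For illustration only - TimeXtender
# manages its own ADF pipelines.
from azure.identity import ClientSecretCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-registration-client-id>",
    client_secret="<client-secret>",
)

adf = DataFactoryManagementClient(credential, "<subscription-id>")

run = adf.pipelines.create_run(
    resource_group_name="<resource-group>",
    factory_name="<data-factory-name>",
    pipeline_name="<pipeline-name>",
)

status = adf.pipeline_runs.get("<resource-group>", "<data-factory-name>", run.run_id)
print(status.status)  # Queued, InProgress, Succeeded, or Failed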

SynapseMDW.png

4. Create MDW Storage - Azure Synapse SQL Pool (SQL DW)

As your organization's data grows and performance becomes a key consideration, Azure Synapse Dedicated SQL Pool, a massively parallel processing database, can be a great option for your data warehouse storage at scale.

Guide:

Use Azure Synapse SQL Pool (SQL DW)

Considerations:
  • Recommended Synapse Use Case Criteria:
    • More than 1 TB of data
    • Tables with at least 1 billion rows
    • DWH tuning is needed to achieve max performance
  • Data tables are sharded across 60 distributions, which are spread over the compute nodes. There are three distribution options (Round-robin, Replicated, and Hash), each with ideal use cases (a brief illustration follows this list).
  • TimeXtender automatically switches to PolyBase for transport when using Synapse for a data warehouse. Note: the user interface will not reflect this change has been made.
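
TimeXtender generates the Synapse tables for you, so the sketch below is only a conceptual illustration of what a distribution choice looks like in dedicated SQL pool DDL, executed from Python with placeholder server, database, table, and column names.

# Minimal sketch (placeholder names): create a hash-distributed table in a
# dedicated SQL pool. For illustration only - TimeXtender generates its own DDL.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace-name>.sql.azuresynapse.net;"  # dedicated SQL pool endpoint
    "Database=<dedicated-pool-name>;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

ddl = """
CREATE TABLE dbo.FactSalesExample
(
    SaleKey     BIGINT NOT NULL,
    CustomerKey INT    NOT NULL,
    Amount      DECIMAL(18, 2)
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),  -- or ROUND_ROBIN, or REPLICATE
    CLUSTERED COLUMNSTORE INDEX        -- typical for large fact tables
);
"""
conn.cursor().execute(ddl)
conn.commit()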

 

PowerBI_Dataset.png

5. Configure Power BI Premium Endpoint (Optional)

If you have Power BI Premium, TimeXtender can deploy and execute Semantic Models directly to the Power BI Premium XMLA endpoint.
Guide:

Power BI XMLA Endpoint
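
TimeXtender handles deployment and execution of the semantic model itself; as a related illustration, the sketch below queues a refresh of a deployed model in a Premium workspace through the Power BI REST API (not the XMLA protocol itself), using an App Registration. Workspace and dataset IDs are placeholders.

# Minimal sketch (placeholder IDs): queue a refresh of a deployed semantic
# model in a Premium workspace via the Power BI REST API. For illustration
# only - TimeXtender deploys and executes the model itself.
import msal
import requests

app = msal.ConfidentialClientApplication(
    client_id="<app-registration-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)["access_token"]

url = ("https://api.powerbi.com/v1.0/myorg/groups/<workspace-id>"
       "/datasets/<dataset-id>/refreshes")
response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
print(response.status_code)  # 202 means the refresh was accepted and queued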

 


Estimate Azure Costs

Balancing cost and performance requires monitoring and forecasting of your services and needs.
Guide:

Azure Pricing Calculator*

Considerations:
  • Azure provides a pricing calculator to help you estimate your costs for various configurations.

*Please note, this Azure pricing calculator does not include the price of the TimeXtender License or Consulting services.

References

The Fastest Way to Implement Azure Synapse Analytics
