08-30-2022 01:40 PM
Hi there,
In my current project,
Current status: Azure Databricks streaming jobs ingest JSON messages from Kafka into the raw layer (Parquet files); parsing logic is then applied and 8 tables are created in the raw standardized layer.
Requirement: the business team wants to access this data from an on-prem SQL Server, and therefore proposes what is really an anti-pattern (moving data from cloud to on-prem), since they cannot work with Databricks directly.
How can this be achieved? Using ADF?
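For reference, the current ingestion looks roughly like this (a minimal sketch; the Kafka topic, bootstrap servers and storage paths are placeholders, not the real names):

```python
# Minimal sketch of the current flow: stream JSON from Kafka into the raw layer as Parquet.
# Topic name, bootstrap servers and ADLS paths below are placeholders.
from pyspark.sql.functions import col

raw_stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events-topic")
    .option("startingOffsets", "earliest")
    .load()
    # Kafka delivers the payload as binary; keep the raw JSON string as-is for the raw layer.
    .select(col("value").cast("string").alias("json_payload"), col("timestamp"))
)

(
    raw_stream.writeStream
    .format("parquet")
    .option("path", "abfss://raw@storageaccount.dfs.core.windows.net/events/")
    .option("checkpointLocation", "abfss://raw@storageaccount.dfs.core.windows.net/_checkpoints/events/")
    .start()
)
```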
08-30-2022 01:42 PM
@Werner Stinckens can you help?
08-31-2022 12:04 AM
You could indeed use ADF to copy the data from the cloud to on-prem.
However, depending on the size of the data, this can take a while.
I use the same pattern, but only for aggregated, processed data, which is not an issue at all.
You could also look at Azure Synapse Serverless or Delta Sharing, or even SQL Hybrid.
But if the data MUST be on-prem, a copy seems the only way.
In that case I'd try to use PolyBase for the copy (I think recent versions of SQL Server support it).
But I would strongly suggest not moving raw data to on-prem.
There are several possibilities to query cloud data using SQL (maybe not SQL Server, but Azure Synapse Serverless also uses T-SQL, for example).
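If Delta Sharing is an option, the consumers could read the shared tables directly into pandas from on-prem machines without touching Databricks. A minimal sketch, assuming a share and profile file have already been set up (the share/schema/table names below are made up):

```python
# Minimal Delta Sharing sketch: read a shared table into pandas on any machine
# (including on-prem) using the open delta-sharing client.
# The profile file and share/schema/table names here are placeholders.
import delta_sharing

profile_file = "config.share"  # profile file provided by the data provider
table_url = f"{profile_file}#my_share.raw_standardized.events"

# Loads the full shared table as a pandas DataFrame.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```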
08-31-2022 03:16 PM
Hey, thank you so much @Werner Stinckens.
Yes, it is a must to move the data to on-prem. Can you please share links/guides for this?
I was going through the link below; it looks like it will not be a direct copy from Delta Lake. I would need to use an interim Azure storage instance (staged copy from Delta Lake)?
09-01-2022 07:23 AM
I think so. The staged copy is probably a Parquet version (or something similar) of the Delta Lake table (I do not copy Delta Lake to an on-prem RDBMS at the moment).
If the tables to be copied are Parquet rather than Delta Lake, the staging is not necessary.
The staging does not take a lot of time though, certainly not compared to inserting into SQL Server!
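If you want to handle the staging yourself, one option (just an illustration, not necessarily how ADF's staged copy works internally) is to snapshot the Delta table to plain Parquet from Databricks and let the copy tool pick that up; the paths below are placeholders:

```python
# Hypothetical staging step: snapshot a Delta table as plain Parquet so a copy tool
# can read it without Delta Lake support. All paths are placeholders.
delta_path = "abfss://standardized@storageaccount.dfs.core.windows.net/table_1/"
staging_path = "abfss://staging@storageaccount.dfs.core.windows.net/table_1/"

(
    spark.read.format("delta").load(delta_path)
    .write.mode("overwrite")
    .parquet(staging_path)
)
```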
09-01-2022 07:53 AM
Currently, data is read from the JSON files and landed in the raw layer in Parquet format. After that, one table is created where all the data is inserted (one common raw standardized table), and then the different tables are created from it using parsing logic.
What would be your suggestion for moving this data to the on-prem RDBMS?
09-01-2022 07:56 AM
It depends on the use case. If your colleagues want to do ad hoc analysis on these parsed tables, then yes. But if they have a specific use case (or several), I'd prepare/transform/aggregate the data first and send that to SQL Server.
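As a toy illustration of that preparation step (the table and column names are made up):

```python
# Hypothetical aggregation before shipping to SQL Server: one row per device per day
# instead of the raw parsed events. Table and column names are placeholders.
from pyspark.sql import functions as F

parsed = spark.table("raw_standardized.parsed_events")

daily_summary = (
    parsed
    .groupBy("device_id", F.to_date("event_time").alias("event_date"))
    .agg(
        F.count(F.lit(1)).alias("event_count"),
        F.avg("value").alias("avg_value"),
    )
)

daily_summary.write.mode("overwrite").saveAsTable("gold.daily_device_summary")
```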
09-01-2022 09:28 AM
Agree! My only concern is the parsing logic that will have to be applied while writing data to the on-prem SQL Server. Should we move this data to Azure SQL first?
09-05-2022 12:13 AM
I'd apply all logic in Databricks/Spark, as there you have the advantage of parallel processing. Write the prepared data to Azure SQL, so no transformations have to be done in the RDBMS.
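For the last hop out of Spark, a JDBC write to the SQL database is one way to do it (a minimal sketch, assuming the prepared DataFrame already exists; server, database, table and secret names are placeholders, and an ADF copy remains an alternative for the cloud-to-on-prem leg):

```python
# Hypothetical example: write an already-prepared DataFrame from Databricks to a SQL
# database over JDBC, so the RDBMS only receives finished data. Connection details
# are placeholders; use a secret scope rather than hard-coded credentials.
prepared_df = spark.table("raw_standardized.events_prepared")  # placeholder table name

jdbc_url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb"

(
    prepared_df.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.events_prepared")
    .option("user", dbutils.secrets.get("my-scope", "sql-user"))
    .option("password", dbutils.secrets.get("my-scope", "sql-password"))
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .mode("overwrite")
    .save()
)
```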