by frank7 • New Contributor II
- 3681 Views
- 2 replies
- 1 kudos
I have a PySpark DataFrame that contains information about the tables I have in a SQL database (creation date, number of rows, etc.). Sample data: {
"Day": "2023-04-28",
"Environment": "dev",
"DatabaseName": "default",
"TableName": "discount"...
Latest Reply
@Bruno Simoes: Yes, it is possible to write a PySpark DataFrame to a custom log table in a Log Analytics workspace using the Azure Log Analytics Workspace API. Here's a high-level overview of the steps you can follow: Create an Azure Log Analytics Works...
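The reply above refers to the Azure Log Analytics HTTP Data Collector API, which requires each POST to be signed with an HMAC-SHA256 "SharedKey" header. A minimal sketch of that signing step — the workspace ID, shared key, and sample row below are hypothetical placeholders, not values from the thread:

```python
import base64
import hashlib
import hmac
import json
from datetime import datetime, timezone

def build_signature(workspace_id, shared_key, date, content_length,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    """Build the SharedKey Authorization header the Data Collector API expects:
    an HMAC-SHA256 over the request line, keyed with the decoded workspace key."""
    string_to_hash = (f"{method}\n{content_length}\n{content_type}\n"
                      f"x-ms-date:{date}\n{resource}")
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                      hashlib.sha256).digest()
    encoded = base64.b64encode(digest).decode("utf-8")
    return f"SharedKey {workspace_id}:{encoded}"

# Rows collected from the DataFrame (e.g. df.toJSON().collect()) would be
# posted as a JSON array, with this signature in the Authorization header.
body = json.dumps([{"Day": "2023-04-28", "TableName": "discount"}])
rfc1123_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
auth = build_signature("00000000-0000-0000-0000-000000000000",  # hypothetical
                       base64.b64encode(b"dummy-key").decode(),  # hypothetical
                       rfc1123_date, len(body))
```

The custom log type (e.g. `Log-Type: TableStats`) is sent as a separate header; Log Analytics appends `_CL` to it when creating the custom table.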
1 More Replies
- 2300 Views
- 2 replies
- 2 kudos
I was following the tutorial about data transformation with Azure Databricks, and it says that before loading data into Azure Synapse Analytics, the data transformed by Azure Databricks is first saved to temporary storage in Azure Blob Storage before loa...
Latest Reply
@Ajay Pandey: Saving the transformed data to temporary storage in Azure Blob Storage before loading it into Azure Synapse Analytics provides a number of benefits, ensuring that the data is accurate, optimized, and performs well in the target environmen...
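The staging flow described above is what the Databricks Synapse connector (`com.databricks.spark.sqldw`) does when you pass a `tempDir`: data is written as files to the staging location, then bulk-loaded into Synapse. A minimal write sketch — the JDBC URL, staging path, and table name here are hypothetical placeholders:

```python
def synapse_write_options(jdbc_url, temp_dir, table_name):
    """Connector options: data is staged under tempDir, then loaded
    into the Synapse table via PolyBase/COPY."""
    return {
        "url": jdbc_url,
        "tempDir": temp_dir,
        "forwardSparkAzureStorageCredentials": "true",
        "dbTable": table_name,
    }

opts = synapse_write_options(
    "jdbc:sqlserver://myserver.sql.azuresynapse.net:1433;database=mydb",  # hypothetical
    "abfss://staging@myaccount.dfs.core.windows.net/tmp",                 # hypothetical
    "dbo.discount",
)
# On Databricks, with df the transformed DataFrame:
# df.write.format("com.databricks.spark.sqldw").options(**opts).mode("append").save()
```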
1 More Replies
- 2489 Views
- 3 replies
- 4 kudos
Hello Team, I am trying to read data from Synapse Analytics from Databricks. Below is the query to read the table: %python ### Read from Azure Synapse table via spark.read.load df = spark.read \ .format("com.databricks.spark.sqldw") \ .option("ur...
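The truncated snippet above uses the same Synapse connector for reads. A fuller read sketch under the same assumptions — the JDBC URL, staging path, and query are hypothetical placeholders:

```python
def synapse_read_options(jdbc_url, temp_dir, query):
    """Connector options for a read: the query result is exported to
    temp_dir, then Spark reads the staged files back as a DataFrame."""
    return {
        "url": jdbc_url,
        "tempDir": temp_dir,
        "forwardSparkAzureStorageCredentials": "true",
        "query": query,
    }

opts = synapse_read_options(
    "jdbc:sqlserver://myserver.sql.azuresynapse.net:1433;database=mydb",  # hypothetical
    "abfss://staging@myaccount.dfs.core.windows.net/tmp",                 # hypothetical
    "SELECT TOP 10 * FROM dbo.discount",
)
# On Databricks:
# df = spark.read.format("com.databricks.spark.sqldw").options(**opts).load()
```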
Latest Reply
Hi @Rohit Kulkarni, hope all is well! Just wanted to check in on whether you were able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Tha...
2 More Replies
by TMNGB • New Contributor II
- 1975 Views
- 0 replies
- 2 kudos
When writing data from PySpark to Azure SQL Server (official Databricks tutorial here), I am getting an error in the conversion between Spark and Parquet types. I believe this is caused by the temporary storage location that is mandatory when writing data from...
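Since the error surfaces in the Spark-to-Parquet conversion at the staging step, one common workaround is to cast ambiguous columns explicitly before the write, so the staged files carry types that map cleanly to the SQL target. A sketch of that idea — the column names and target types below are hypothetical:

```python
# Hypothetical columns whose inferred Spark types may not round-trip
# cleanly through Parquet to SQL Server (dates, wide decimals, timestamps).
CASTS = {
    "Day": "date",
    "RowCount": "bigint",
    "LoadTimestamp": "timestamp",
}

def cast_expressions(casts):
    """Build selectExpr strings like 'CAST(Day AS date) AS Day' so each
    column is written with an explicit, unambiguous type."""
    return [f"CAST({col} AS {typ}) AS {col}" for col, typ in casts.items()]

exprs = cast_expressions(CASTS)
# On Databricks, before the write:
# df = df.selectExpr(*exprs)
```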
- 5884 Views
- 3 replies
- 6 kudos
When I am trying to insert records into an Azure Synapse table using JDBC, it throws the error below: com.microsoft.sqlserver.jdbc.SQLServerException: The statement failed. Column 'COMPANY_ADDRESS_STATE' has a data type that cannot participate ...
Latest Reply
Columns that use any of the following data types cannot be included in a columnstore index: nvarchar(max), varchar(max), and varbinary(max) (applies to SQL Server 2016 and prior versions, and nonclustered columnstore indexes), so the issue is on the Azu...
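Since Synapse dedicated SQL pool tables default to a clustered columnstore index, one way to avoid the MAX-type conflict when writing through the Databricks Synapse connector is its `maxStrLength` option, which creates string columns as bounded NVARCHAR(n) instead of a MAX type. A sketch — the JDBC URL, staging path, and table name are hypothetical placeholders:

```python
def synapse_options_bounded_strings(jdbc_url, temp_dir, table_name,
                                    max_str_length=4000):
    """With maxStrLength set, Spark StringType columns are created as
    NVARCHAR(max_str_length), so the target table can keep its clustered
    columnstore index instead of failing on nvarchar(max)."""
    return {
        "url": jdbc_url,
        "tempDir": temp_dir,
        "forwardSparkAzureStorageCredentials": "true",
        "dbTable": table_name,
        "maxStrLength": str(max_str_length),
    }

opts = synapse_options_bounded_strings(
    "jdbc:sqlserver://myserver.sql.azuresynapse.net:1433;database=mydb",  # hypothetical
    "abfss://staging@myaccount.dfs.core.windows.net/tmp",                 # hypothetical
    "dbo.company",
)
```

Alternatively, the target table can be created as a HEAP (no columnstore index), which accepts MAX-type columns.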
2 More Replies