How to upload a Spark DataFrame to Azure Table Storage?
Is it possible to create a table in Azure Table Storage from a Spark DataFrame using Python? Any ideas?
- 1490 Views
- 0 replies
- 0 kudos
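As far as I'm aware there is no built-in Spark writer for Azure Table Storage, so a common workaround is to write the rows yourself with the azure-data-tables SDK. Below is a minimal sketch only, assuming connection-string auth, a DataFrame `df` small enough to iterate from the driver, and an `id` column to use as the RowKey; the connection string and table name are placeholders.

```python
# Sketch only: write each DataFrame row as a table entity with the
# azure-data-tables SDK (pip install azure-data-tables). The connection
# string, table name, and key columns below are placeholders.
from azure.data.tables import TableServiceClient

conn_str = "<storage-account-connection-string>"
service = TableServiceClient.from_connection_string(conn_str)
table = service.create_table_if_not_exists("MySparkTable")

# toLocalIterator() streams rows to the driver one partition at a time.
for row in df.toLocalIterator():
    entity = row.asDict()
    # Every entity needs a PartitionKey and a RowKey; here we assume an
    # "id" column exists to serve as the RowKey.
    entity["PartitionKey"] = "spark"
    entity["RowKey"] = str(entity["id"])
    table.upsert_entity(entity)
```

For larger DataFrames the same upsert logic can be pushed into `df.foreachPartition`, so each executor opens its own table client instead of funnelling everything through the driver.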
I have a process in Data Factory that loads CDC changes from SQL Server and then triggers a notebook with a merge into the bronze and silver zones. A single notebook takes about 1 minute to run, but when all 50 notebooks are fired at once the whole process takes 25 ...
I have one Delta table that I continuously append events into, and a second Delta table that I continuously merge into (streamed from the first table) that has unique IDs whose properties are updated from the events (an ID represents a unique thing that ge...
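For the pattern described above (append-only events streamed into a keyed table), one common approach is Structured Streaming with foreachBatch plus a Delta MERGE. A minimal sketch, with hypothetical table names `events` and `entities`, an `id` key column, and a placeholder checkpoint path:

```python
from delta.tables import DeltaTable

def upsert_batch(batch_df, batch_id):
    # MERGE needs a single source row per key, so deduplicate within the
    # micro-batch first (pick whichever dedup rule fits your events).
    latest = batch_df.dropDuplicates(["id"])
    target = DeltaTable.forName(spark, "entities")
    (target.alias("t")
           .merge(latest.alias("s"), "t.id = s.id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

(spark.readStream.table("events")        # first table: append-only events
      .writeStream
      .foreachBatch(upsert_batch)        # second table: keyed, merged
      .option("checkpointLocation", "/tmp/checkpoints/entities_merge")  # placeholder path
      .start())
```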
I have a DLT pipeline that has been running for weeks. Now, trying to rerun the pipeline with the same code and same data fails. I've even tried updating the compute on the cluster to about 3x of what was previously working and it still fails with ou...
I'd focus on understanding the codebase first. It'll help you decide which logic or data assets to keep when you try to optimize it. If you share the architecture of the application, the problem it solves, and some sample code here, it'll h...
I want to run an ETL job, and when the job ends I would like to stop the SparkSession to free my cluster's resources; by doing this I could avoid restarting the cluster. But when calling spark.stop(), the job returns with status failed even though it has f...
Please refer to this article: Job fails, but Apache Spark tasks finish - Databricks
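The gist is that Databricks manages the SparkSession for you, so calling spark.stop() makes the run report a failure even when the work succeeded. A hedged sketch of the usual alternative: end the notebook cleanly and let cluster auto-termination (or a job cluster) free the resources.

```python
# Sketch only: instead of spark.stop() at the end of the job, exit the
# notebook cleanly; "done" is just a hypothetical return value and
# run_etl() stands in for your actual ETL entry point.
run_etl()
dbutils.notebook.exit("done")  # ends the run with a success status
```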
Hi Team, is there a way that we can add data manually to the tables that are generated by DLT? We have done a PoC using DLT for Sep 15 to current data. Now that they are happy, they want the previous data from Synapse put into Databricks. I can e...
Hi all, in the Spark config for a cluster, it works well to refer to an Azure Key Vault secret in the "value" part of the name/value combo on a config row/setting. For example, this works fine (I've removed the string that is our specific storage account name...
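For reference, the cluster-level pattern being described uses a `{{secrets/<scope>/<key>}}` placeholder in the value of a Spark config row. The same effect can be had from a notebook with dbutils.secrets.get; a minimal sketch where the scope, key, and storage account names are all placeholders:

```python
# Placeholder names: adjust the secret scope, key, and storage account
# to your own setup.
account_key = dbutils.secrets.get(scope="my-keyvault-scope", key="storage-account-key")
spark.conf.set(
    "fs.azure.account.key.mystorageaccount.dfs.core.windows.net",
    account_key,
)
```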
Hello, is there any update on this issue please? Databricks no longer recommends mounting external locations, so the other way to access Azure storage is to use the Spark config as mentioned in this document - https://learn.microsoft.com/en-us/azure/databri...
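For anyone landing here, below is a rough sketch of the spark.conf approach from that document, using a service principal; every name (secret scope, keys, storage account) is a placeholder.

```python
# Sketch of per-notebook OAuth config for ADLS Gen2 with a service principal.
storage_account = "mystorageaccount"                          # placeholder
client_id     = dbutils.secrets.get("my-scope", "sp-client-id")
client_secret = dbutils.secrets.get("my-scope", "sp-client-secret")
tenant_id     = dbutils.secrets.get("my-scope", "sp-tenant-id")

spark.conf.set(f"fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage_account}.dfs.core.windows.net", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage_account}.dfs.core.windows.net", client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{storage_account}.dfs.core.windows.net",
               f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")
```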
Can anyone please help me with what is wrong in the syntax here: dbutils.fs.ls; Error in SQL statement: ParseException: [PARSE_SYNTAX_ERROR] Syntax error at or near 'dbutils'. (line 1, pos 0)
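The ParseException suggests the command was run in a SQL cell. dbutils is only available from Python, Scala, or R, and fs.ls needs a path argument; a minimal working example in a Python cell:

```python
# Run in a Python cell (or prefix the cell with %python in a SQL notebook).
files = dbutils.fs.ls("/")   # dbutils.fs.ls requires a path argument
display(files)
```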
Hi! Does DLT use one single SparkSession for all notebooks in a Delta Live Tables Pipeline?
The recommendation before dropping a table is to do a DELETE and then a VACUUM with RETAIN 0 HOURS (recommended in DEV). If you DROP the table without doing a DELETE/VACUUM, your table will be soft deleted along with your entire data (permanently deleted in 30 days) and y...
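A sketch of that sequence for a DEV table; the table name is a placeholder, and RETAIN 0 HOURS permanently removes the data files, so only run it when that is the intent.

```python
# Sketch only: delete the rows, vacuum away the files, then drop the table.
spark.sql("DELETE FROM dev.my_schema.my_table")

# VACUUM with a retention below 7 days requires disabling the safety check.
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")
spark.sql("VACUUM dev.my_schema.my_table RETAIN 0 HOURS")

spark.sql("DROP TABLE dev.my_schema.my_table")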
The table property dataSkippingNumIndexedCols, which controls how many columns get statistics for a table, works from left to right. I am wondering what will happen to the statistics for both new and old records if we add a column in between using the FIRST|AFTER identifier.
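For illustration, here is roughly what that setup looks like, with hypothetical table and column names: delta.dataSkippingNumIndexedCols caps how many leading columns get file-level statistics, and ADD COLUMNS ... AFTER changes which columns fall inside that window for files written afterwards.

```python
# Hypothetical table/columns. Statistics are collected for the first N columns
# in schema order, so inserting a column with AFTER can push a later column
# out of (or into) the indexed window for newly written files.
spark.sql("ALTER TABLE my_catalog.my_schema.events "
          "SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '3')")
spark.sql("ALTER TABLE my_catalog.my_schema.events "
          "ADD COLUMNS (country STRING AFTER event_time)")
```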
What is the status of bamboolib? I understand that it is in public preview, but I'm unable to find any support references. I am getting the error below. I've tried installing it in a notebook and on a cluster, creating a pandas DataFrame and running bam, etc. ...
Hi Databricks community team, I have code as below: df = spark.readStream.format("kinesis").option("endpointUrl", endpoint_url).option("streamName", stream_name).option("initialPosition", "latest").option("consumerMode", "efo").option("ma...
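For readability, here is the visible part of that snippet reconstructed; the preview is cut off, so only the options shown above are reproduced, and endpoint_url / stream_name are placeholders.

```python
# Reconstruction of the truncated snippet above (remaining options omitted).
endpoint_url = "https://kinesis.us-east-1.amazonaws.com"  # placeholder
stream_name = "my-stream"                                  # placeholder

df = (spark.readStream
          .format("kinesis")
          .option("endpointUrl", endpoint_url)
          .option("streamName", stream_name)
          .option("initialPosition", "latest")
          .option("consumerMode", "efo")
          .load())
```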
Assuming you have a catalog "my_catalog" and a schema "my_schema", the following code is not working: full_table_location = "`my_catalog`.`my_schema`.`my_table_hourl`" spark.conf.set("fullTableName", full_table_location) spark.sql("""SELECT * FRO...
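One approach that does work is to build the statement in Python rather than relying on config-variable substitution; a minimal sketch with the same names:

```python
full_table_location = "`my_catalog`.`my_schema`.`my_table_hourl`"

# Interpolate the fully qualified name directly into the statement.
df = spark.sql(f"SELECT * FROM {full_table_location}")
display(df)
```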
Hello, I'm new to Databricks (Community Edition account) and encountered a problem just now. When creating a new cluster (default 10.4 LTS) it fails with the following error: Backend service unavailable. I've tried a different runtime > same issue. I've ...
Hey mbvb_py, I'm sorry to hear you're facing this "Backend service unavailable" issue with Databricks. I've encountered similar problems in the past, and it can be frustrating. Don't worry; you're not alone in this! From my experience, this error can o...