- 2048 Views
- 8 replies
- 4 kudos
I can't get the dynamic value references to work in my jobs. I can use the deprecated references (e.g. job_id) but not the new references (e.g. job.id). As a test, I set a text widget called MyJobID following the example that will receive the dynamic ...
Latest Reply
For what it's worth, it looks like job-level parameters were added with this update as well. I was wondering why I was unable to use those, but those are also working for me now.
7 More Replies
- 1198 Views
- 1 replies
- 0 kudos
Hi, I have a workflow based on Python scripts. How can I import the content of a notebook where a class and functions are defined? I know how to import Python files into notebooks, but the other way around doesn't seem as straightforward.
Latest Reply
Found a solution for executing a notebook, using the Databricks API to download the notebook content as bytes:
1. Set the environment variables DATABRICKS_HOST and DATABRICKS_TOKEN
2. w = WorkspaceClient()
   with w.workspace.download(notebook_path) as n: note...
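The downloaded source still has to be turned into an importable module. A minimal sketch of that second step, using only the standard library: the `source` bytes below are simulated stand-ins for what the SDK's `w.workspace.download(...)` would return, and `notebook_module` / `Greeter` are illustrative names, not part of any Databricks API.

```python
# Sketch: import a class defined in notebook source from a plain Python script.
# In a real workspace the bytes would come from the Databricks SDK:
#   from databricks.sdk import WorkspaceClient
#   w = WorkspaceClient()  # reads DATABRICKS_HOST / DATABRICKS_TOKEN
#   with w.workspace.download(notebook_path) as n:
#       source = n.read()
# Here we simulate the downloaded bytes so the import step runs anywhere.
import importlib.util
import os
import tempfile

source = b"class Greeter:\n    def hello(self):\n        return 'hello'\n"

# Write the source to a temporary .py file and import it as a module.
with tempfile.NamedTemporaryFile("wb", suffix=".py", delete=False) as f:
    f.write(source)
    path = f.name

spec = importlib.util.spec_from_file_location("notebook_module", path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
os.unlink(path)

print(mod.Greeter().hello())  # -> hello
```

Note that this only works cleanly for notebooks containing plain Python definitions; notebook magics (%sql, %md) would need to be stripped first.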
- 1295 Views
- 3 replies
- 2 kudos
Hello all, I have a delta table in the bronze layer, let's call it BRZ. It contains 25B rows and many duplicates. It has a version 0 and a version 1, nothing else yet. I then create a silver table SLV by running one deduplication batch job. This creates ve...
Latest Reply
Thanks for the confirmation. Not sure I see everything as your text gets truncated, but it basically confirms that it should work. Anyway: it looks like the incremental load is working. The problem here is that we receive late-arriving facts that tou...
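The keep-latest-per-key logic behind such a deduplication can be sketched in plain Python; in Spark this would typically be a window function (row_number over a partition by key, ordered by timestamp descending) or dropDuplicates. The `id`/`ts`/`v` field names and the sample rows below are purely illustrative.

```python
# Sketch: keep the latest record per key, the core of a dedup batch job.
rows = [
    {"id": 1, "ts": 1, "v": "a"},
    {"id": 1, "ts": 3, "v": "c"},   # late-arriving update for id 1
    {"id": 2, "ts": 2, "v": "b"},
    {"id": 1, "ts": 2, "v": "b2"},
]

latest = {}
for r in rows:
    # keep the row with the highest timestamp per id
    if r["id"] not in latest or r["ts"] > latest[r["id"]]["ts"]:
        latest[r["id"]] = r

deduped = sorted(latest.values(), key=lambda r: r["id"])
print(deduped)
```

A late-arriving fact (the `ts=3` row) simply replaces the earlier winner for its key, which is why late data forces the affected keys to be reprocessed rather than appended blindly.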
2 More Replies
- 1434 Views
- 1 replies
- 0 kudos
Does anyone know how to change the format of a date like this: Dec 17 2016 8:22PM into yyyy-MM-dd hh:mm:ss? Thanks
Latest Reply
Convert to a timestamp first and then format it as a string:
select date_format(to_timestamp('Dec 17 2016 8:22PM', 'MMM dd yyyy h:ma'), "yyyy-MM-dd HH:mm:ss")
Here is the documentation for this: https://docs.databricks.com/en/sql/language-manual/sql-ref-datet...
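The same conversion outside Spark SQL can be sketched with Python's standard datetime module (note the format codes differ from Spark's pattern letters):

```python
# Parse "Dec 17 2016 8:22PM" and reformat it as yyyy-MM-dd HH:mm:ss.
from datetime import datetime

s = "Dec 17 2016 8:22PM"
# %b = abbreviated month, %I = 12-hour clock, %p = AM/PM marker
dt = datetime.strptime(s, "%b %d %Y %I:%M%p")
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # -> 2016-12-17 20:22:00
```

This is handy for validating the expected output before wiring the equivalent pattern into a Spark job.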
by
Randy
• New Contributor III
- 616 Views
- 1 replies
- 0 kudos
We have a process that creates a table in Synapse and then attempts to write the data generated in Databricks to it. We can create the table without a problem, but when we go to copy the data we keep getting an error that the column has a data type that ...
Latest Reply
Randy
New Contributor III
- 3405 Views
- 4 replies
- 0 kudos
I took the Azure datasets that are available for practice. I got the 10 days of data from that dataset and now I want to save this data into DBFS in CSV format. I am facing an error: "No such file or directory: 'No such file or directory: '/dbfs...
Latest Reply
Hi, after some experimentation: be aware that a folder created with dbutils.fs.mkdirs("/dbfs/tmp/myfolder") actually ends up at /dbfs/dbfs/tmp/myfolder. If you want to access the path with to_csv("/dbfs/tmp/myfolder/mytest.csv"), you should create it with this script: dbutils.fs...
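The confusion comes from the two views of DBFS: dbutils.fs paths are rooted at the DBFS root, while local file APIs like pandas' to_csv see DBFS through the /dbfs FUSE mount. A small sketch of the path convention; `dbfs_to_fuse` is an illustrative helper, not a Databricks API:

```python
# Sketch: map a dbutils.fs-style DBFS path to the /dbfs FUSE path that
# local file APIs (e.g. pandas to_csv) expect on a Databricks driver.

def dbfs_to_fuse(path: str) -> str:
    """dbfs:/tmp/myfolder -> /dbfs/tmp/myfolder"""
    if path.startswith("dbfs:/"):
        return "/dbfs/" + path[len("dbfs:/"):].lstrip("/")
    if path.startswith("/"):
        # a bare "/tmp/myfolder" given to dbutils.fs is relative to the DBFS root
        return "/dbfs" + path
    raise ValueError(f"not a DBFS path: {path}")

print(dbfs_to_fuse("dbfs:/tmp/myfolder"))  # -> /dbfs/tmp/myfolder
print(dbfs_to_fuse("/tmp/myfolder"))       # -> /dbfs/tmp/myfolder
```

So passing "/dbfs/tmp/myfolder" to dbutils.fs.mkdirs creates a *DBFS* folder named /dbfs/tmp/myfolder, which the FUSE mount then exposes as /dbfs/dbfs/tmp/myfolder — hence the doubled prefix in the error.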
3 More Replies
- 2418 Views
- 6 replies
- 6 kudos
How to run a delta live tables pipeline in production? It uses the owner's (creator's) permissions for writing to tables, and I can't change the owner of a UC-enabled pipeline after creation. I don't want regular users to have write access to prod ta...
- 402 Views
- 0 replies
- 1 kudos
Hello, I've been experimenting with trying to read and/or clone an existing Iceberg table into Databricks/Delta. I have an Azure Blob Storage container (configured to use abfss for access) that contains an existing Iceberg table structure (data in par...
- 916 Views
- 2 replies
- 1 kudos
Hi. I have a question, and I've not been able to find an answer. I'm sure there is one...I just haven't found it through searching and browsing the docs. How much does it matter (if it is indeed that simple) if source files read by auto loader are ...
Latest Reply
Hi @ilarsen, According to the Azure Databricks documentation, Auto Loader incrementally and efficiently processes new data files as they arrive in cloud storage. Auto Loader can load data files from Azure Data Lake Storage Gen2 (ADLS Gen2) using hier...
1 More Replies
- 3264 Views
- 1 replies
- 0 kudos
Hi Team Experts, I am seeing high memory consumption in the "other" category of the memory utilization chart in the metrics tab. Right now I am not running any jobs, but out of 8 GB of driver memory almost 6 GB is taken by "other" and only 1.5 GB is t...
Latest Reply
Hello,
Thanks for contacting Databricks Support.
It seems you are concerned about high memory consumption in the "other" category on the driver node of a Spark cluster. As there are no logs or detailed information provided, I can only address several potentia...
- 2070 Views
- 7 replies
- 1 kudos
Hi everyone, I'm planning to use the Databricks Python CLI "install_libraries". Can someone please post examples of the function install_libraries? https://github.com/databricks/databricks-cli/blob/main/databricks_cli/libraries/api.py
Latest Reply
Here you go, using the Python SDK:
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute
w = WorkspaceClient(host="yourhost", token="yourtoken")
# Create an array of Library objects to be installed
libraries_to_install = [compute...
6 More Replies
- 631 Views
- 1 replies
- 0 kudos
Hi, I followed the tutorial here: https://docs.databricks.com/en/delta-live-tables/cdc.html#how-is-cdc-implemented-with-delta-live-tables The only change I made is that data is not appended to a table but is read from a parquet file. In practice this me...
Latest Reply
My bad - waiting a bit and doing a proper screen refresh does show the numbers.
- 1909 Views
- 5 replies
- 1 kudos
The Delta table created as a result of the DataFrame returned by @dlt.create_table is confirmed to be overwritten when checked with the DESCRIBE HISTORY command. I want this to be handled as a CRAS, or CREATE AS SELECT, but how can I do this in Python...
Latest Reply
Hi @rt-slowth, you can review the open-source Delta code base to learn more about DeltaTableBuilder's implementation in Python.
https://github.com/delta-io/delta/blob/master/python/delta/tables.py
4 More Replies
by
msj50
• New Contributor III
- 7450 Views
- 11 replies
- 1 kudos
My company urgently needs help; we are having severe performance problems with Spark and will have to switch to a different solution if we don't get to the bottom of it.
We are on 1.3.1, using Spark SQL, ORC files with partitions, and caching in me...
Latest Reply
Hi @msj50 , Thank you for posting your question in our community! We are happy to assist you.
To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your ...
10 More Replies