- 21292 Views
- 13 replies
- 2 kudos
I have created a number of workflows in the Databricks UI. I now need to deploy them to a different workspace. How can I do that? Code can be deployed via Git, but the job definitions are stored only in the workspace.
Latest Reply
@itacdonev provided a great option. @Dean_Lovelace, you can also select View JSON on the workflow and switch to the Create tab; with that JSON you can call the API https://docs.databricks.com/api/workspace/jobs/create and create the job in the target workspace.
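A minimal sketch of that flow against the Jobs 2.1 create endpoint. The host, token, and job_definition.json file are placeholders; the JSON is assumed to be copied from the workflow's View JSON dialog in the source workspace.

    import json
    import requests

    TARGET_HOST = "https://<target-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"  # placeholder

    # Job definition exported from the UI's View JSON dialog (Create tab)
    with open("job_definition.json") as f:
        job_definition = json.load(f)

    # POST the definition to the target workspace's jobs/create endpoint
    resp = requests.post(
        f"{TARGET_HOST}/api/2.1/jobs/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=job_definition,
    )
    resp.raise_for_status()
    print("Created job_id:", resp.json()["job_id"])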
- 26962 Views
- 8 replies
- 4 kudos
Consider two tables, A and B:

    qry = """INSERT INTO A SELECT * FROM B WHERE Id IS NULL"""
    spark.sql(qry)

I need to get the number of records inserted after running this in Databricks.
Latest Reply
Almost the same advice as Hubert's: I use the history of the Delta table:

    df_history.select(F.col('operationMetrics')).collect()[0].operationMetrics['numOutputRows']

You can also find other 'operationMetrics' values there, like 'numTargetRowsDeleted'.
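A runnable version of that reply, as a sketch: it assumes an active SparkSession named spark, the delta-spark package, and the table names A and B from the question.

    from delta.tables import DeltaTable
    import pyspark.sql.functions as F

    # Run the insert from the question
    spark.sql("INSERT INTO A SELECT * FROM B WHERE Id IS NULL")

    # Fetch only the most recent entry from the Delta table's history
    df_history = DeltaTable.forName(spark, "A").history(1)

    # operationMetrics is a map column; collecting yields a Python dict
    # whose values are strings, hence the int() conversion
    metrics = df_history.select(F.col("operationMetrics")).collect()[0]["operationMetrics"]
    print("Rows inserted:", int(metrics["numOutputRows"]))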
- 1014 Views
- 1 replies
- 0 kudos
Is there an upper bound on the value I can assign to delta.dataSkippingNumIndexedCols for computing statistics? Is there a trade-off benchmark available for increasing this number beyond 32?
Latest Reply
@Chhavi Bansal: The delta.dataSkippingNumIndexedCols configuration property controls the number of leading columns for which Delta Lake collects file-level statistics used for data skipping. By default, this value is set to 32. There is no hard upper bound on this value, but collecting statistics on more columns adds overhead to every write, so raising it well beyond 32 trades write performance for potentially better read-time skipping.
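For illustration, a hedged sketch of raising the limit on a single table (my_table is a placeholder name; the new setting applies to statistics collected for newly written files):

    # Collect statistics on the first 40 columns instead of the default 32
    spark.sql("""
        ALTER TABLE my_table
        SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '40')
    """)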