Alter table
Hi Team, could you please suggest: is there an alternate approach to alter the table, instead of creating a new table and copying the data, as part of the deployment?
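For Delta tables, many schema changes can be applied in place with ALTER TABLE rather than by creating a new table and copying the data. Below is a minimal PySpark sketch, assuming a Unity Catalog Delta table and a Databricks notebook or job where `spark` is already defined; the table and column names are hypothetical placeholders.

```python
# Minimal sketch: in-place Delta schema changes, assuming a Unity Catalog
# Delta table. Table/column names are hypothetical; `spark` comes from the
# Databricks notebook/job context.

# Add new columns without recreating the table or copying data.
spark.sql("""
    ALTER TABLE my_catalog.my_schema.my_table
    ADD COLUMNS (new_col STRING COMMENT 'added during deployment')
""")

# Renaming or dropping columns needs Delta column mapping enabled first
# (a metadata-only change; data files are not rewritten).
spark.sql("""
    ALTER TABLE my_catalog.my_schema.my_table SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.minReaderVersion'   = '2',
        'delta.minWriterVersion'   = '5'
    )
""")
spark.sql("ALTER TABLE my_catalog.my_schema.my_table RENAME COLUMN old_name TO new_name")
spark.sql("ALTER TABLE my_catalog.my_schema.my_table DROP COLUMN obsolete_col")
```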
Hi all! I am having the following issue with a couple of PySpark streams. I have some notebooks, each of them running an independent file Structured Streaming job using a Delta bronze table (gzip Parquet files) dumped from Kinesis to S3 in a previous job....
I am trying to use the constraint options NOT ENFORCED, DEFERRABLE, INITIALLY DEFERRED, and NORELY. However, it seems I am not able to use them successfully. When I try to use them with PRIMARY KEYS (not sure if it is possible), I am not able to enforce any key....
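On Databricks, PRIMARY KEY (and FOREIGN KEY) constraints are informational only, so options like NOT ENFORCED / NORELY do not make the engine reject violating rows; enforcement is only available through NOT NULL and CHECK constraints. A minimal sketch, assuming a Unity Catalog Delta table; the table and column names are hypothetical placeholders.

```python
# Minimal sketch, assuming a Unity Catalog Delta table; table/column names
# are hypothetical. PRIMARY KEY is informational only on Databricks (never
# enforced); NOT NULL and CHECK constraints are enforced on write.

# Primary-key columns must be NOT NULL before the constraint can be added.
spark.sql("ALTER TABLE my_catalog.my_schema.orders ALTER COLUMN order_id SET NOT NULL")

# Informational primary key: accepted, but duplicate keys are not rejected.
spark.sql("""
    ALTER TABLE my_catalog.my_schema.orders
    ADD CONSTRAINT orders_pk PRIMARY KEY (order_id)
""")

# Enforced constraint: writes that violate the expression fail.
spark.sql("""
    ALTER TABLE my_catalog.my_schema.orders
    ADD CONSTRAINT positive_amount CHECK (amount > 0)
""")
```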
Hi @Cert-Team, My Databricks exam got suspended on December 9, 2023, at 11:30, and it is still in the suspended state. During the exam, it was initially paused due to poor lighting, but after addressing that, it worked fine. However, after some time, ...
Hi @Jay_adb I'm sorry to hear you had this issue. Thanks for filing a ticket with the support team. I have sent a message to them to look into your ticket and resolve asap.
Is there a way to perform a dry-run with "bundle deploy" in order to see the job configuration changes for an environment without actually deploying the changes?
I am trying to create a multi-task Databricks job in Azure with its own cluster. Although I was able to create a single-task job without any issues, the code to deploy the multi-task job fails due to the following cluster validation error: error:...
Hello @Retired_mod, thanks for your answer, but the problem remains the same. I had already tested with different cluster configurations, single-node and multi-node, including the cluster configurations that worked with single-task jobs, but the err...
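For reference, the usual multi-task layout declares the cluster once under `job_clusters` and has each task point at it via `job_cluster_key`, rather than embedding a full cluster spec in every task. Below is a sketch using the Databricks Python SDK, not the poster's actual deployment code; the job name, notebook paths, Spark version, and node type are hypothetical placeholders.

```python
# Sketch: one shared job cluster referenced by two tasks. All names, paths,
# the Spark version, and the node type are hypothetical placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

w = WorkspaceClient()  # reads host/token from the environment or .databrickscfg

created = w.jobs.create(
    name="multi-task-demo",
    job_clusters=[
        jobs.JobCluster(
            job_cluster_key="shared_cluster",
            new_cluster=compute.ClusterSpec(
                spark_version="13.3.x-scala2.12",
                node_type_id="Standard_DS3_v2",
                num_workers=2,
            ),
        )
    ],
    tasks=[
        jobs.Task(
            task_key="ingest",
            job_cluster_key="shared_cluster",
            notebook_task=jobs.NotebookTask(notebook_path="/Shared/ingest"),
        ),
        jobs.Task(
            task_key="transform",
            depends_on=[jobs.TaskDependency(task_key="ingest")],
            job_cluster_key="shared_cluster",
            notebook_task=jobs.NotebookTask(notebook_path="/Shared/transform"),
        ),
    ],
)
print(created.job_id)
```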
We are trying to capture the query executed by Spark. We are trying to use df.queryExecution.redactedSql to get the SQL from the query execution, but it is not working in the sqlListener.
Hello, I am facing a similar kind of issue. I am working on a Power BI paginated report and Databricks is my source for the report. I was trying to pass the parameter by passing the query in the expression builder as mentioned below. https://community.databri...
Hi, I am new to Databricks. We are trying to use Databricks Asset Bundles for code deployment. I have spent a lot of time, but still so many things are not clear to me. Can we change the target path of the notebooks deployed from /shared/.bundle/* to so...
Hi @Retired_mod, Thank you for your post. I thought it would solve my issues too; however, after reading your suggestion, it was nothing new to me, because I had already done exactly that. Here is what I have done so you or anyone can replicate it: 1. ...
We needed to move to databricks-connect>13.x. Now I am facing the issue that when I work with a nested dataframe of the structure
```
root
 |-- a: string (nullable = true)
 |-- b: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- c: s...
```
In addition, here is the full stack trace:
23/12/07 14:51:56 ERROR SerializingExecutor: Exception while executing runnable grpc_shaded.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable@33dfd6ec
grpc_shaded.io.grpc...
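For anyone trying to reproduce this, here is a small sketch that builds a DataFrame with the nested shape from the schema above using databricks-connect. It assumes column `c` is a string and that the host/token come from the environment or a configured profile; the rows are made up.

```python
# Sketch: reproduce the nested shape (array of structs) via databricks-connect
# (>= 13.x). Assumes `c` is a string and that connection settings come from
# the environment or a configured profile; the data is hypothetical.
from databricks.connect import DatabricksSession
from pyspark.sql.types import ArrayType, StringType, StructField, StructType

spark = DatabricksSession.builder.getOrCreate()

schema = StructType([
    StructField("a", StringType(), True),
    StructField("b", ArrayType(
        StructType([StructField("c", StringType(), True)]),
        containsNull=True,
    ), True),
])

df = spark.createDataFrame([("row1", [("x",), ("y",)])], schema=schema)
df.printSchema()
df.show(truncate=False)
```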
Hi Team, my requirement is: I have File A from source A, which needs to be written into multiple Delta tables, i.e. DeltaTableA, DeltaTableB, and DeltaTableC. Is it possible to have a single instance of an Auto Loader script (multiple write streams)? Could you p...
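One common pattern is a single Auto Loader readStream fanned out with foreachBatch, so one stream (and one checkpoint) can append each micro-batch to several Delta tables. A sketch under assumed details: the input path and format, schema/checkpoint locations, and per-table projections are hypothetical placeholders.

```python
# Sketch: one Auto Loader stream writing each micro-batch to several Delta
# tables via foreachBatch. Paths, input format, and the per-table column
# selections/filters are hypothetical placeholders.

def write_to_targets(batch_df, batch_id):
    batch_df.persist()
    # Each target can receive its own projection/filter of the same micro-batch.
    batch_df.write.format("delta").mode("append").saveAsTable("DeltaTableA")
    batch_df.select("id", "payload").write.format("delta").mode("append").saveAsTable("DeltaTableB")
    batch_df.filter("event_type = 'audit'").write.format("delta").mode("append").saveAsTable("DeltaTableC")
    batch_df.unpersist()

(
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                      # assumed input format
    .option("cloudFiles.schemaLocation", "/tmp/schemas/file_a")
    .load("s3://my-bucket/source-a/")
    .writeStream
    .foreachBatch(write_to_targets)
    .option("checkpointLocation", "/tmp/checkpoints/file_a")
    .start()
)
```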
I have an SCD Type 1 Delta table (target) for which I am trying to figure out how to facilitate inserts, updates, and deletes. This table is sourced from multiple Delta tables with an SCD Type 2 structure, which are joined together to create the targe...
Correction (I can't seem to edit or remove the original post):
- "... trying to think through an process" --> "... trying to think through *a* process"
- "Thoughts and advice or much appreciated" --> "Thoughts and/or advice are much appreciated."
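For the SCD Type 1 target described above, a single Delta MERGE can cover inserts, updates, and deletes, provided the joined SCD Type 2 sources are first reduced to their current rows. A sketch assuming that reduced result is available as `source_df`; the table name, key column, and the availability of whenNotMatchedBySourceDelete (newer Delta Lake releases) are assumptions.

```python
# Sketch: insert/update/delete into an SCD Type 1 target with one MERGE.
# `source_df` is assumed to hold the current rows produced by joining the
# SCD Type 2 sources; the table name and key column are hypothetical, and
# whenNotMatchedBySourceDelete needs a recent Delta Lake / DBR version.
from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "my_schema.scd1_target")

(
    target.alias("t")
    .merge(source_df.alias("s"), "t.business_key = s.business_key")
    .whenMatchedUpdateAll()          # overwrite changed attributes (Type 1)
    .whenNotMatchedInsertAll()       # new keys
    .whenNotMatchedBySourceDelete()  # keys that disappeared from the source
    .execute()
)
```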
I am trying to run a TRUNCATE command on my Exasol DWH from Databricks using pyexasol. This works perfectly fine when I have the cluster access mode set to "No Isolation Shared", which does not have access to our Unity Catalog. When I change the clust...
Interesting. Did you try with "Single User" mode, which also has UC support?
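For context, a minimal pyexasol round trip of the kind described above; the DSN, credentials, and table name are hypothetical placeholders, and whether the outbound connection is permitted will still depend on the cluster access mode.

```python
# Sketch: TRUNCATE against Exasol via pyexasol. The DSN, credentials, and
# table name are hypothetical placeholders.
import pyexasol

conn = pyexasol.connect(
    dsn="exasol.example.com:8563",
    user="my_user",
    password="my_password",
)
try:
    conn.execute("TRUNCATE TABLE my_schema.my_table")
finally:
    conn.close()
```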
I want to connect to Databricks from Knime on a company computer that uses a proxy. The error I'm encountering is as follows: ERROR Create Databricks Environment 3:1 Execute failed: Could not open the client transport with JDBC URI: jdbc:hive2://adb-...