Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

by Trodenn, New Contributor III
  • 5324 Views
  • 5 replies
  • 1 kudos

Resolved! approxQuantile does not seem to be working with Delta Live Tables (DLT)

Hi, I am trying to use the approxQuantile() function to populate a list that I made, yet somehow, whenever I try to run the code it is as if the list is empty and there are no values in it. The code is written as below: @dlt.table(name = "customer_order_silv...

Latest Reply
Hubert-Dudek
Esteemed Contributor III

Maybe try using the standard df = spark.read.table("customer_order_silver") to calculate approxQuantile (and first test it in a separate notebook). Of course, you need to set a target location in the catalog for customer_order_silver, so read us...
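
A minimal sketch of the suggested workaround, assuming the pipeline publishes customer_order_silver to a catalog target; the column name and quantile probabilities are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the materialized table instead of referencing the live DLT dataset
df = spark.read.table("customer_order_silver")

# approxQuantile is an action that returns plain Python floats, so the
# resulting list can be inspected immediately; "amount" is a placeholder column
quantiles = df.approxQuantile("amount", [0.25, 0.5, 0.75], 0.05)
print(quantiles)
```

Running this in a separate notebook first, as suggested above, isolates whether the problem is approxQuantile itself or the DLT execution context.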

4 More Replies
by guru1, New Contributor II
  • 4603 Views
  • 2 replies
  • 0 kudos

Resolved! Facing the issue below when connecting Event Hubs with Databricks; followed an earlier discussion on this but found no solution

ERROR: Query termination received for [id=37bada03-131b-4fbb-8992-a427263fef2c, runId=cf3d7c18-780e-43ae-aed0-9daf2939b823], with exception: java.lang.IllegalArgumentException: Input byte array has wrong 4-byte ending unit at java.util.Base64$Decoder...

Latest Reply
Annapurna_Hiriy
Databricks Employee

The issue could be due to a mismatch between the Event Hubs jar and the dependencies added. Also, not all the required dependencies may be added. Suggestions: use the azure_eventhubs_spark_2_12_.jar Event Hubs Spark jar along with the following dependencies...
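
For reference, a hedged sketch of a typical PySpark Event Hubs setup with the azure-event-hubs-spark connector; the secret scope, key, and consumer group are placeholders. An unencrypted connection string is a common cause of Base64 decode errors like the one quoted above:

```python
# Pull the connection string from a secret scope (names are hypothetical)
conn_str = dbutils.secrets.get(scope="my-scope", key="eventhub-connection-string")

ehConf = {
    # The connector expects an encrypted connection string; passing the raw
    # string often surfaces as java.lang.IllegalArgumentException in Base64$Decoder
    "eventhubs.connectionString":
        sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(conn_str),
    "eventhubs.consumerGroup": "$Default",
}

df = spark.readStream.format("eventhubs").options(**ehConf).load()
```

The jar's Scala version (2.12 here) also has to match the cluster's Databricks Runtime, per the reply above.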

1 More Replies
by ravinchi, New Contributor III
  • 4115 Views
  • 4 replies
  • 7 kudos

I'd like to ingest data into my ADLS from SQL Server in an incremental manner using Delta Live Tables.

I'd like to ingest data into my ADLS from SQL Server in an incremental manner using Delta Live Tables. I do not want to use any staging tables. I was using CDC; when I call dlt.apply_changes, it's asking me to specify source and target. Since source ...

Latest Reply
Sandeep
Contributor III

If you have a CDC feed, it looks like you can use this: https://docs.databricks.com/workflows/delta-live-tables/delta-live-tables-cdc.html
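
A minimal sketch of the apply_changes API from that page, assuming the CDC feed already lands as files in cloud storage; the path, key, and sequencing/operation columns are placeholders:

```python
import dlt
from pyspark.sql.functions import col

@dlt.view
def orders_cdc_feed():
    # Raw CDC events, e.g. exported from SQL Server; path and format are hypothetical
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/orders_cdc"))

# The target must exist before apply_changes can write into it
# (older runtimes name this create_streaming_live_table)
dlt.create_streaming_table("orders")

dlt.apply_changes(
    target="orders",
    source="orders_cdc_feed",
    keys=["order_id"],                     # primary key of the feed
    sequence_by=col("commit_timestamp"),   # column that orders the changes
    apply_as_deletes=col("operation") == "DELETE",
)
```

Here the source is just another DLT dataset, which is why the API asks for both names even when no staging table is involved.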

3 More Replies
by nagini_sitarama, New Contributor III
  • 2977 Views
  • 3 replies
  • 2 kudos

Error while optimizing the table: failure of InSet.sql for UTF8String collection

Count of the table: 1,125,089 rows for October data, so I am optimizing the table: OPTIMIZE table WHERE batchday >= "2022-10-01" AND batchday <= "2022-10-31". I am getting an error like: GC overhead limit exceeded at org.apache.spark.unsafe.types.UTF8St...

Latest Reply
Priyanka_Biswas
Databricks Employee

Hi @Nagini Sitaraman, to understand the issue better I would like to get some more information. Does the error occur on the driver side or the executor side? Can you please share the full error stack trace? You may need to check the Spark UI to find wher...
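
While gathering that, one detail from the post is worth double-checking: OPTIMIZE only accepts predicates on partition columns. A hedged sketch, assuming the table really is partitioned by batchday (the table name is a placeholder):

```python
# Restricting OPTIMIZE to one month of partitions keeps the file-compaction
# workload, and its memory footprint, bounded
spark.sql("""
    OPTIMIZE my_table
    WHERE batchday >= '2022-10-01' AND batchday <= '2022-10-31'
""")
```

If batchday is not a partition column, OPTIMIZE rejects the WHERE clause outright, so the command above only narrows the work on a properly partitioned table.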

2 More Replies
by Aviral-Bhardwaj, Esteemed Contributor III
  • 21239 Views
  • 2 replies
  • 13 kudos

Understanding Rename in Databricks: there are multiple ways to rename Spark DataFrame columns or expressions. We can rename columns or expressions...

Understanding Rename in Databricks. There are multiple ways to rename Spark DataFrame columns or expressions. We can rename columns or expressions using alias as part of a select. We can add or rename columns or expressions using withColumn on top of t...
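
A short sketch of the approaches the (truncated) post describes; the DataFrame and column names are placeholders:

```python
from pyspark.sql import functions as F

df = spark.createDataFrame([(1, "a")], ["id", "val"])

# 1) alias as part of a select
renamed = df.select(F.col("id").alias("customer_id"), "val")

# 2) withColumn to add or replace an expression under a new name
derived = df.withColumn("val_upper", F.upper(F.col("val")))

# 3) withColumnRenamed for a pure rename with no expression involved
plain = df.withColumnRenamed("val", "value")
```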

Latest Reply
Ajay-Pandey
Esteemed Contributor III

Very informative, thanks for sharing.

1 More Replies
by AlexDavies, Contributor
  • 3469 Views
  • 2 replies
  • 2 kudos

Issue connecting to SQL warehouse spark thrift server

We have a library that allows dotnet applications to talk to Databricks clusters (https://github.com/clearbank/SparkSqlClient). This communicates with the clusters over the Spark Thrift Server. Although this works great for clusters in the "data scienc...

Latest Reply
AlexDavies
Contributor

I have tried those connection details, however they give me 400 errors when trying to connect directly using the Hive Thrift Server contract (https://github.com/apache/hive/blob/master/service-rpc/if/TCLIService.thrift). I do not get the issues whe...
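
Not the raw Thrift contract the library speaks, but for comparison, a hedged sketch of the officially supported route to a SQL warehouse from Python, the databricks-sql-connector package; hostname, HTTP path, and token are placeholders:

```python
from databricks import sql

# Connection details come from the warehouse's "Connection details" tab
with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abcdef1234567890",
    access_token="dapi...",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchall())
```

If this connector works while the direct Thrift calls return 400, that points at protocol or auth-header differences rather than the warehouse itself.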

1 More Replies
by cristianc, Contributor
  • 1839 Views
  • 2 replies
  • 1 kudos

Unexpected workspace setup dialog in the account

Greetings, recently we were doing cleanups in AWS and removed some Databricks-related resources that were used only once for setting up our workspace and have not been used since. Since there is no plan to create any other workspaces, the decision was t...

Latest Reply
cristianc
Contributor

The resources that were cleaned up were just the ones used for the initial setup of the workspace; everything else important for day-to-day operation is in place and we are actively using the workspace, therefore there is no plan to de...

1 More Replies
by ftc, New Contributor II
  • 1207 Views
  • 1 reply
  • 2 kudos

Can Databricks Certified Data Engineer Professional exam questions be short and easy to understand?

Most questions on the Databricks Certified Data Engineer Professional exam are too long for those with English as a second language. There is not enough time to read through the questions, and they are sometimes hard to comprehend.

Latest Reply
eimis_pacheco
Contributor

I strongly agree with you. There is no Spanish version of this exam. These exams are long even for native speakers; just imagine for people with English as a second language. For instance, since Amazon does not have a Spanish version, they took this...

by BF, New Contributor II
  • 6957 Views
  • 3 replies
  • 2 kudos

Resolved! Pyspark - How do I convert date/timestamp of format like /Date(1593786688000+0200)/ in pyspark?

Hi all, I have a dataframe with a CreateDate column in this format: /Date(1593786688000+0200)/, /Date(1446032157000+0100)/, /Date(1533904635000+0200)/, /Date(1447839805000+0100)/, /Date(1589451249000+0200)/, and I want to convert that format to date/tim...

Latest Reply
Chaitanya_Raju
Honored Contributor

Hi @Bruno Franco, can you please try the code below; hope it works for you. from pyspark.sql.functions import from_unixtime from pyspark.sql import functions as F final_df = df_src.withColumn("Final_Timestamp", from_unixtime((F.regexp_extract(col("Cr...
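
The reply is cut off, so here is a hedged reconstruction of the same regexp_extract/from_unixtime idea as a runnable sketch; the sample values come from the post, and dropping the timezone suffix is a simplifying assumption:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df_src = spark.createDataFrame(
    [("/Date(1593786688000+0200)/",), ("/Date(1446032157000+0100)/",)],
    ["CreateDate"],
)

# Capture the epoch milliseconds, convert to seconds, then to a timestamp;
# the +0200/+0100 offset is ignored here, so results land in session time
final_df = df_src.withColumn(
    "Final_Timestamp",
    F.from_unixtime(
        (F.regexp_extract(F.col("CreateDate"), r"/Date\((\d+)[+-]\d+\)/", 1)
         .cast("long") / 1000).cast("long")
    ).cast("timestamp"),
)
final_df.show(truncate=False)
```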

2 More Replies
by whh99, New Contributor II
  • 2176 Views
  • 2 replies
  • 1 kudos

Given a user id, what API can we use to find out which cluster the user is connected to?

I want to know the cluster that a user is connected to in Databricks. It would be great if we could also get the duration that the user is connected.

Latest Reply
daniel_sahal
Esteemed Contributor

You can track activity logs by activating audit logs. I'm not sure which cloud provider you're using, but for example for Azure you can find a manual here: https://learn.microsoft.com/en-us/azure/databricks/administration-guide/account-settings/audit-logs
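
Once audit log delivery is enabled, a hedged sketch of what mining them for cluster activity could look like, assuming JSON logs delivered to a storage path; the path and email are placeholders, and the field names follow the documented audit log schema:

```python
# Audit logs delivered as JSON files to a configured location (hypothetical path)
logs = spark.read.json("/mnt/audit-logs/")

cluster_events = (
    logs
    .where("serviceName = 'clusters'")
    .where("userIdentity.email = 'someone@example.com'")
    .select("timestamp", "actionName", "requestParams")
)
cluster_events.show(truncate=False)
```

Connection duration isn't recorded directly; it has to be derived from the gap between a user's event timestamps.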

1 More Replies
by SreedharVengala, New Contributor III
  • 26577 Views
  • 10 replies
  • 7 kudos

PGP Encryption / Decryption in Databricks

Is there a way to decrypt/encrypt blob files in Databricks using a key stored in Key Vault? What libraries need to be used? Any code snippets? Links?

Latest Reply
Anonymous
Not applicable

I am looking at similar requirements, exploring various options to encrypt/decrypt ADLS data using ADB PySpark. Please share the list of options available.
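
One commonly cited option, sketched here with heavy hedging: the pure-Python pgpy package, with the private key held in a Key Vault-backed secret scope. The scope, key names, and file path are placeholders, and this is not a Databricks-provided API:

```python
import pgpy

# Key material from a Key Vault-backed secret scope (names are hypothetical)
key_text = dbutils.secrets.get(scope="kv-scope", key="pgp-private-key")
passphrase = dbutils.secrets.get(scope="kv-scope", key="pgp-passphrase")

key, _ = pgpy.PGPKey.from_blob(key_text)
encrypted = pgpy.PGPMessage.from_file("/dbfs/mnt/raw/file.csv.pgp")

# Unlock the key only for the duration of the decryption
with key.unlock(passphrase):
    plaintext = key.decrypt(encrypted).message
```

python-gnupg is the other frequently mentioned route, but it shells out to a gpg binary that has to be installed on the cluster, which is why a pure-Python library can be simpler on Databricks.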

9 More Replies
by 190809, Contributor
  • 1062 Views
  • 1 reply
  • 1 kudos

What are the requirements in order for the event log to collect backlog metrics?

I am trying to use the event log to collect metrics on 'flow_progress' under the 'event_type' field. In the docs it is suggested that this information may not be collected depending on the data source and runtime used (see screenshot). Can anyone let ...

Latest Reply
User16539034020
Databricks Employee

Thanks for contacting Databricks Support! I understand that you're looking for information on unsupported data source types and runtimes for the backlog metrics. Unfortunately, we have not documented that information at this time. It's possible that som...
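
For anyone checking what their own pipeline emits, a hedged sketch of querying the event log for flow_progress backlog metrics, assuming the pipeline has a storage location configured; the path is a placeholder, and backlog_bytes only appears for sources that report it:

```python
# The DLT event log is a Delta table under the pipeline's storage location
events = spark.read.format("delta").load("dbfs:/pipelines/<pipeline-id>/system/events")

backlog = (
    events
    .where("event_type = 'flow_progress'")
    .selectExpr(
        "timestamp",
        "origin.flow_name",
        # details is a JSON string; the colon syntax extracts a JSON path
        "details:flow_progress.metrics.backlog_bytes",
    )
)
backlog.show(truncate=False)
```

A null backlog_bytes for a given flow is itself the signal that the source/runtime combination does not report the metric.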

by Ak3, New Contributor III
  • 4513 Views
  • 5 replies
  • 7 kudos

Databricks ADLS vs Azure SQL: which is better for data warehousing, and why?

Databricks ADLS vs Azure SQL: which is better for data warehousing, and why?

Latest Reply
Hubert-Dudek
Esteemed Contributor III

Databricks is the data lake / lakehouse and Azure SQL is the database.

4 More Replies
by horatiug, New Contributor III
  • 3459 Views
  • 4 replies
  • 1 kudos

Databricks workspace with custom VPC using terraform in Google Cloud

I am working on Google Cloud and want to create a Databricks workspace with a custom VPC using Terraform. Is that supported? If yes, is it similar to the AWS way? Thank you, Horatiu

Latest Reply
Anonymous
Not applicable

Hi @horatiu guja, GCP workspace provisioning using Terraform is in public preview now. Please refer to the doc below for the steps: https://registry.terraform.io/providers/databricks/databricks/latest/docs/guides/gcp-workspace

3 More Replies
by johnb1, Contributor
  • 7064 Views
  • 4 replies
  • 0 kudos

SELECT from table saved under path

Hi! I saved a dataframe as a Delta table with the following syntax: (test_df.write.format("delta").mode("overwrite").save(output_path)). How can I issue a SELECT statement on the table? What do I need to insert into [table_name] below? SELECT ...

Latest Reply
Ajay-Pandey
Esteemed Contributor III

Hi @John B, there are two ways to access your Delta table: SELECT * FROM delta.`your_delta_table_path`, or df.write.format("delta").mode("overwrite").option("path", "your_path").saveAsTable("table_name"). Now you can use your select query: SELECT * FROM [table_...
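
Both options from the reply as a runnable sketch; output_path and test_df are the variables from the original post, and the table name is a placeholder:

```python
# Option 1: query the files at the path directly, no metastore entry needed
spark.sql(f"SELECT * FROM delta.`{output_path}`").show()

# Option 2: register the same location as a named table, then query by name
(test_df.write
    .format("delta")
    .mode("overwrite")
    .option("path", output_path)
    .saveAsTable("my_table"))

spark.sql("SELECT * FROM my_table").show()
```

Option 1 is handy for ad hoc checks; option 2 is the one that makes [table_name] in the poster's SELECT work.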

3 More Replies
