Data Engineering

Forum Posts

ftc
by New Contributor II
  • 557 Views
  • 1 reply
  • 2 kudos

Can Databricks Certified Data Engineer Professional exam questions be short and easy to understand?

Most questions on the Databricks Certified Data Engineer Professional exam are too long for people who speak English as a second language. There is not enough time to read through the questions, and they are sometimes hard to comprehend.

Latest Reply
eimis_pacheco
Contributor
  • 2 kudos

I strongly agree with you. There is no Spanish version of this exam. Those exams are long even for native speakers; just imagine for people with English as a second language. For instance, since Amazon does not have a Spanish version, they took this...

jonathan-dufaul
by Valued Contributor
  • 1419 Views
  • 4 replies
  • 5 kudos

Why is writing to MSSQL Server 12.0 so slow directly from spark but nearly instant when I write to a csv and read it back

I have a dataframe that inexplicably takes forever to write to an MS SQL Server, even though other dataframes, even much larger ones, write nearly instantly. I'm using this code: my_dataframe.write.format("jdbc").option("url", sqlsUrl).optio...
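
A minimal sketch of the JDBC write pattern the post describes, with placeholder connection details. The dbtable, credentials, batchsize, and numPartitions options are assumptions (they are not visible in the truncated snippet) but are the options most commonly tuned when JDBC writes are slow:

# Hedged sketch of a Spark JDBC write to SQL Server. my_dataframe and sqlsUrl
# come from the post; the table, credentials, and batching options are placeholders.
(my_dataframe.write
    .format("jdbc")
    .option("url", sqlsUrl)                 # e.g. jdbc:sqlserver://<host>:1433;database=<db>
    .option("dbtable", "dbo.my_table")      # hypothetical target table
    .option("user", "<user>")
    .option("password", "<password>")
    .option("batchsize", 10000)             # larger batches cut JDBC round trips
    .option("numPartitions", 8)             # parallel connections to the server
    .mode("append")
    .save())

If the plan that produces my_dataframe is expensive, caching or checkpointing it before the write may also explain why writing a CSV and reading it back appears nearly instant.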

Latest Reply
yueyue_tang
New Contributor II
  • 5 kudos

I have the same problem, and I don't know how to write a DataFrame to MS SQL Server quickly.

3 More Replies
BF
by New Contributor II
  • 3136 Views
  • 3 replies
  • 2 kudos

Resolved! Pyspark - How do I convert date/timestamp of format like /Date(1593786688000+0200)/ in pyspark?

Hi all, I've a dataframe with a CreateDate column in this format: /Date(1593786688000+0200)/, /Date(1446032157000+0100)/, /Date(1533904635000+0200)/, /Date(1447839805000+0100)/, /Date(1589451249000+0200)/, and I want to convert that format to date/tim...

Latest Reply
Chaitanya_Raju
Honored Contributor
  • 2 kudos

Hi @Bruno Franco​, can you please try the code below; hope it works for you: from pyspark.sql.functions import from_unixtime; from pyspark.sql import functions as F; final_df = df_src.withColumn("Final_Timestamp", from_unixtime((F.regexp_extract(col("Cr...
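
For readers, a hedged reconstruction of the approach in the truncated reply; the regex, the division by 1000, and ignoring the +0200 offset are assumptions about the /Date(1593786688000+0200)/ format:

# Extract the epoch milliseconds from values like "/Date(1593786688000+0200)/"
# and convert them to a timestamp (the zone offset is ignored in this sketch).
from pyspark.sql import functions as F

epoch_ms = F.regexp_extract(F.col("CreateDate"), r"/Date\((\d+)", 1).cast("long")

final_df = df_src.withColumn(
    "Final_Timestamp",
    F.from_unixtime((epoch_ms / 1000).cast("long")).cast("timestamp"),
)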

2 More Replies
whh99
by New Contributor II
  • 978 Views
  • 3 replies
  • 1 kudos

Given user id, what API can we use to find out which cluster the user is connected to?

I want to know which cluster a user is connected to in Databricks. It would be great if we could also get the duration for which the user has been connected.

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @Hui Hui Wong​ (Customer), we haven't heard from you since the last response from @Daniel Sahal​ (Customer), and I was checking back to see if his suggestions helped you. Otherwise, if you have any solution, please share it with the community, as...

2 More Replies
SreedharVengala
by New Contributor III
  • 13501 Views
  • 18 replies
  • 9 kudos

PGP Encryption / Decryption in Databricks

Is there a way to decrypt / encrypt blob files in Databricks using a key stored in Key Vault? What libraries need to be used? Any code snippets? Links?
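
No accepted answer appears in this excerpt; below is only a hedged sketch of one common approach, using python-gnupg with the armored private key stored as a Key Vault secret. It assumes the gnupg binary and the python-gnupg and azure-keyvault-secrets packages are installed on the cluster, and the vault URL, secret names, and file paths are hypothetical:

# Hedged sketch: fetch a PGP private key from Azure Key Vault and decrypt a
# file copied to the DBFS filesystem. All names below are placeholders.
import gnupg
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault = SecretClient(
    vault_url="https://<my-vault>.vault.azure.net",        # hypothetical vault
    credential=DefaultAzureCredential(),
)
private_key = vault.get_secret("pgp-private-key").value    # hypothetical secret
passphrase = vault.get_secret("pgp-passphrase").value      # hypothetical secret

gpg = gnupg.GPG()
gpg.import_keys(private_key)

with open("/dbfs/tmp/data.csv.pgp", "rb") as encrypted:
    result = gpg.decrypt_file(encrypted, passphrase=passphrase,
                              output="/dbfs/tmp/data.csv")
print(result.ok, result.status)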

Latest Reply
Anonymous
Not applicable
  • 9 kudos

I have a similar requirement and am exploring various options to encrypt/decrypt ADLS data using ADB PySpark. Please share the list of options available.

17 More Replies
190809
by Contributor
  • 438 Views
  • 1 reply
  • 1 kudos

What are the requirements in order for the event log to collect backlog metrics?

I am trying to use the event log to collect metrics on 'flow_progress' under the 'event_type' field. The docs suggest that this information may not be collected, depending on the data source and runtime used (see screenshot). Can anyone let ...

[Attached screenshot of the docs note: Screenshot 2022-12-07 at 11.30.43]
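
As context for the question, a hedged sketch of reading flow_progress events from a pipeline's event log; the storage path is a placeholder, and the backlog fields are only populated for data sources and runtimes that support them:

# Read the DLT event log and pull backlog metrics out of flow_progress events.
from pyspark.sql import functions as F

events = spark.read.format("delta").load("dbfs:/pipelines/<pipeline-id>/system/events")

backlog = (
    events
    .filter(F.col("event_type") == "flow_progress")
    .select(
        "timestamp",
        F.col("origin.flow_name").alias("flow_name"),
        F.get_json_object("details", "$.flow_progress.metrics.backlog_bytes").alias("backlog_bytes"),
    )
)
backlog.show(truncate=False)
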
Latest Reply
User16539034020
Contributor II
  • 1 kudos

Thanks for contacting Databricks Support! I understand that you're looking for information on unsupported data source types and runtimes for the backlog metrics. Unfortunately, we have not documented that information at this time. It's possible that som...

Ak3
by New Contributor III
  • 1761 Views
  • 5 replies
  • 6 kudos

Databricks ADLS vs Azure SQL: which is better for data warehousing, and why?

Databricks ADLS vs Azure SQL: which is better for data warehousing, and why?

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 6 kudos

Databricks is the data lake / lakehouse and Azure SQL is the database.

4 More Replies
hanish
by New Contributor II
  • 1415 Views
  • 3 replies
  • 2 kudos

Job cluster support in jobs/runs/submit API

We are using the jobs/runs/submit API of Databricks to create and trigger a one-time run with new_cluster and existing_cluster configurations. We would like to check if there is a provision to pass "job_clusters" in this API to reuse the same cluster across...

Latest Reply
Anonymous
Not applicable
  • 2 kudos

@Hanish Bansal​ Shared job clusters for the jobs/runs/submit API are not supported at the moment.
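
To make the constraint concrete, a hedged sketch of a runs/submit payload in which each task carries its own new_cluster block (there is no shared job_clusters section on this endpoint); the host, token, notebook path, and cluster sizing are placeholders:

# One-time run via the Jobs 2.1 runs/submit API; all identifiers are placeholders.
import requests

payload = {
    "run_name": "one-time-run",
    "tasks": [
        {
            "task_key": "ingest",
            "notebook_task": {"notebook_path": "/Repos/demo/ingest"},   # hypothetical
            "new_cluster": {
                "spark_version": "11.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,
            },
        }
    ],
}

resp = requests.post(
    "https://<workspace-host>/api/2.1/jobs/runs/submit",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json=payload,
)
print(resp.json())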

2 More Replies
horatiug
by New Contributor III
  • 1595 Views
  • 5 replies
  • 1 kudos

Databricks workspace with custom VPC using terraform in Google Cloud

I am working on Google Cloud and want to create a Databricks workspace with a custom VPC using Terraform. Is that supported? If yes, is it similar to the AWS way? Thank you, Horatiu

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @horatiu guja​, GCP workspace provisioning using Terraform is in public preview now. Please refer to the doc below for the steps: https://registry.terraform.io/providers/databricks/databricks/latest/docs/guides/gcp-workspace

4 More Replies
johnb1
by New Contributor III
  • 2909 Views
  • 4 replies
  • 0 kudos

SELECT from table saved under path

Hi! I saved a dataframe as a Delta table with the following syntax: (test_df.write.format("delta").mode("overwrite").save(output_path)). How can I issue a SELECT statement on the table? What do I need to insert into [table_name] below? SELECT ...

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 0 kudos

Hi @John B​, there are two ways to access your Delta table. Either query the path directly: SELECT * FROM delta.`your_delta_table_path`, or save it as a named table: df.write.format("delta").mode("overwrite").option("path", "your_path").saveAsTable("table_name"). Now you can use your select query: SELECT * FROM [table_...
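
A short hedged sketch of the two options from the reply, assuming output_path points at the Delta files written above ("my_table" is a hypothetical name):

# 1) Query the Delta files directly by path; no metastore table is required.
df_by_path = spark.sql(f"SELECT * FROM delta.`{output_path}`")

# 2) Register a table over the same path, then query it by name.
spark.sql(f"CREATE TABLE IF NOT EXISTS my_table USING DELTA LOCATION '{output_path}'")
df_by_name = spark.sql("SELECT * FROM my_table")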

3 More Replies
xiaochong
by New Contributor III
  • 528 Views
  • 1 reply
  • 2 kudos

Is Delta Live Tables planned to be open source in the future?

Is Delta Live Tables planned to be open source in the future?

Latest Reply
Priyanka_Biswas
Valued Contributor
  • 2 kudos

Hello there @G Z​. I would say "we have a history of open-sourcing our biggest innovations, but there's no concrete timeline for DLT. It's built on the open APIs of Spark and Delta, so the most important parts (your transformation logic and your data) ...

joakon
by New Contributor III
  • 1443 Views
  • 4 replies
  • 3 kudos

Resolved! Databricks - Workflow- Jobs- Script to automate

Hi - I have created a Databricks job under Workflows, and it's running fine without any issues. I would like to promote this job to other workspaces using a script. Is there a way to script the job definition and deploy it across multiple workspaces? I ...
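
The accepted answer is not shown in this excerpt; one hedged way to script this is to export the job's settings with the Jobs API from the source workspace and create the job in a target workspace. The hosts, tokens, and job_id below are placeholders:

# Copy a job definition between workspaces with the Jobs 2.1 REST API.
import requests

SRC_HOST, SRC_TOKEN = "https://<source-host>", "<source-token>"
DST_HOST, DST_TOKEN = "https://<target-host>", "<target-token>"

# jobs/get returns {"job_id": ..., "settings": {...}}; jobs/create takes the settings body.
job = requests.get(
    f"{SRC_HOST}/api/2.1/jobs/get",
    headers={"Authorization": f"Bearer {SRC_TOKEN}"},
    params={"job_id": 123},   # hypothetical job id
).json()

resp = requests.post(
    f"{DST_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {DST_TOKEN}"},
    json=job["settings"],
)
print(resp.json())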

Latest Reply
joakon
New Contributor III
  • 3 kudos

thank you @Landan George​ 

3 More Replies
Dbks_Community
by New Contributor II
  • 857 Views
  • 2 replies
  • 0 kudos

Cross region Databricks to SQL Connection

We are trying to connect an Azure Databricks cluster to an Azure SQL database, but the firewall at the SQL level is causing an issue. Whitelisting the Databricks subnet is not an option here, as the two resources are in different Azure regions. Is there a secure way ...

Latest Reply
Cedric
Valued Contributor
  • 0 kudos

Hi @Timir Ranjan​, have you tried looking into private endpoints? This lets you expose your Azure SQL database over the Azure backbone and is supported cross-region. https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview P...

1 More Replies
StevenW
by New Contributor III
  • 2560 Views
  • 10 replies
  • 0 kudos

Resolved! Large MERGE Statements - 500+ lines of code!

I'm new to Databricks (not new to DBs; 10+ years as a DB developer). How do you generate a MERGE statement in Databricks? Trying to manually maintain 500+ or 1000+ lines in a MERGE statement doesn't make much sense. Working with large tables of between...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 0 kudos

In my opinion, when possible the MERGE statement should be on the primary key. If that is not possible, you can create your own unique key (by concatenating some fields and possibly hashing them) and then use it in the merge logic.
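
A hedged sketch of that idea in PySpark: derive a surrogate key by hashing concatenated columns and generate the SET/INSERT assignments instead of hand-writing hundreds of lines. The table name and key columns are placeholders, and it assumes the target table already stores the same merge_key column:

# Generate a hash-based merge key and column assignments programmatically.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

key_cols = ["customer_id", "order_date"]        # hypothetical natural-key columns

src = source_df.withColumn("merge_key", F.sha2(F.concat_ws("||", *key_cols), 256))
target = DeltaTable.forName(spark, "my_catalog.my_schema.orders")   # hypothetical table

assignments = {c: f"s.{c}" for c in src.columns}   # generated, not hand-maintained

(target.alias("t")
    .merge(src.alias("s"), "t.merge_key = s.merge_key")
    .whenMatchedUpdate(set=assignments)
    .whenNotMatchedInsert(values=assignments)
    .execute())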

9 More Replies
KVNARK
by Honored Contributor II
  • 1448 Views
  • 5 replies
  • 7 kudos

Resolved! SQL error while executing

Any fixes to the error would be much appreciated.

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 7 kudos

Hi @KVNARK​, could you please send the query that you are executing? That will help me debug the error.

4 More Replies