Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Mado
by Valued Contributor II
  • 13662 Views
  • 3 replies
  • 0 kudos

Resolved! How to enforce unique values in a Delta table column?

Hi, I have defined a Delta table with a primary key:

%sql
CREATE TABLE IF NOT EXISTS test_table_pk (
  table_name STRING NOT NULL,
  label STRING NOT NULL,
  table_location STRING NOT NULL,
  CONSTRAINT test_table_pk_col PRIMARY KEY(table_name) ...

Latest Reply
Steve_Lyle_BPCS
New Contributor II
  • 0 kudos

I'm with you. But it DOES make sense, because DBx databases are not application databases. DBx is not intended to be used like this. DBx databases are repositories for any ingested abstract data. Managing the ingestion is for purpose-built databases ...
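
Since Databricks PRIMARY KEY constraints are informational and not enforced, uniqueness has to be handled at write time. A minimal PySpark sketch, assuming a hypothetical staging view named updates with the same schema as the question's test_table_pk:

# Upsert so an existing table_name is updated rather than duplicated.
# `updates` is a hypothetical staging view holding the incoming rows.
spark.sql("""
    MERGE INTO test_table_pk AS t
    USING updates AS s
    ON t.table_name = s.table_name
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")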

2 More Replies
learning_1989
by New Contributor II
  • 2309 Views
  • 2 replies
  • 1 kudos

How do you read a nested JSON file with multiple key-value pairs in Databricks?

You have a JSON file which is nested, with multiple key-value pairs. How do you read it in Databricks?

Latest Reply
Lakshay
Databricks Employee
  • 1 kudos

You should be able to read the JSON file with the code below:

val df = spark.read.format("json").load("file.json")

After this, you will need to use the explode function to add columns to the DataFrame from the nested values.
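
For a concrete shape, a minimal PySpark sketch (the file path and the id/items/name/price fields are hypothetical):

from pyspark.sql.functions import col, explode

# multiLine handles JSON records that span several lines
df = spark.read.option("multiLine", True).json("/path/to/file.json")

# explode turns each element of the items array into its own row,
# then dot notation pulls the nested fields up into columns
flat = (df
        .select(col("id"), explode(col("items")).alias("item"))
        .select("id", col("item.name").alias("name"), col("item.price").alias("price")))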

1 More Replies
RKNutalapati
by Valued Contributor
  • 2680 Views
  • 3 replies
  • 0 kudos

How to use Oracle Wallet to connect from Databricks

How to connect Databricks to Oracle DAS / Autonomous Database using a cloud wallet? What are the typical steps and best practices to follow? I would appreciate an example code snippet for connecting to the above data source.

Latest Reply
RKNutalapati
Valued Contributor
  • 0 kudos

Followed the steps below to build the connection: unzip the Oracle Wallet objects and copy them to a secure location accessible by your Databricks workspace; collaborate with your Network team and Oracle Autonomous Instance admins to open firewalls between yo...
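
A minimal PySpark sketch of the read itself, assuming the wallet was unzipped to a hypothetical DBFS path and the Oracle JDBC driver (plus the wallet security jars such as oraclepki, where required) is installed on the cluster; the service name and table are hypothetical:

# TNS_ADMIN points the thin driver at the unzipped wallet directory;
# 'mydb_high' is a hypothetical service name from the wallet's tnsnames.ora.
wallet_dir = "/dbfs/FileStore/oracle_wallet"
df = (spark.read.format("jdbc")
      .option("url", f"jdbc:oracle:thin:@mydb_high?TNS_ADMIN={wallet_dir}")
      .option("driver", "oracle.jdbc.OracleDriver")
      .option("dbtable", "MYSCHEMA.MY_TABLE")
      .load())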

2 More Replies
Snentley
by New Contributor
  • 1548 Views
  • 1 reply
  • 0 kudos

Free Voucher for Data Engineering Associate Certification

Could you please inform me which specific webinar participation might grant eligibility for a certification exam voucher? Additionally, I would like to know whether this voucher would cover the full cost of the certification exam or only a partial am...

Latest Reply
Kiv9
New Contributor II
  • 0 kudos

Did you get any response on this?

Phani1
by Valued Contributor II
  • 672 Views
  • 1 reply
  • 0 kudos

Databricks masking

Should we convert the Python-based masking logic to SQL in Databricks for implementing masking? Will the masking feature continue to work while connected to Power BI? Regards, Phanindra

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@Phani1 - Could you please be more precise about the question? Are you discussing the mask function in DBSQL?
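
If the question is about Unity Catalog column masks, a minimal sketch run via spark.sql (table, column, and group names are hypothetical); because the mask is applied inside the warehouse, it also takes effect for clients such as Power BI:

# Masking function: members of 'admins' see the value, everyone else a redacted string
spark.sql("""
    CREATE OR REPLACE FUNCTION mask_ssn(ssn STRING)
    RETURNS STRING
    RETURN CASE WHEN is_member('admins') THEN ssn ELSE '***-**-****' END
""")

# Attach the mask to the column
spark.sql("ALTER TABLE customers ALTER COLUMN ssn SET MASK mask_ssn")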

amama
by New Contributor II
  • 2446 Views
  • 3 replies
  • 1 kudos

How to run spark sql file through Azure Databricks

We have a process that writes Spark SQL to a file; this process will generate thousands of Spark SQL files in the production environment. These files will be created in an ADLS Gen2 directory. Sample spark file: val 2023_I = spark.sql("select rm...

Latest Reply
shan_chandra
Databricks Employee
  • 1 kudos

@amama - You can mount the ADLS storage location in Databricks. Since this is Scala code, you can use a workflow and create tasks to execute it, providing the mount location as the input.
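
A minimal sketch, assuming a hypothetical mount point and that the files contain plain SQL statements (the Scala sample in the question would instead need a notebook or JAR task per file):

# Iterate the mounted directory and execute each .sql file's statements
for f in dbutils.fs.ls("/mnt/prod_sql_files"):
    if f.path.endswith(".sql"):
        sql_text = "\n".join(r.value for r in spark.read.text(f.path).collect())
        for stmt in sql_text.split(";"):
            if stmt.strip():
                spark.sql(stmt)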

2 More Replies
marcusmv
by New Contributor II
  • 2521 Views
  • 2 replies
  • 1 kudos

Resolved! Advanced Data Engineering with Databricks course

I'm looking for materials to prepare for the Databricks Certified Professional Data Engineer exam. But I see two courses titled 'Advanced Data Engineering with Databricks' in the academy (E-VDG8QV and E-19WXD1). Which one of these courses should I be ...

Data Engineering
associate
exam
learning
professional
Latest Reply
marcusmv
New Contributor II
  • 1 kudos

Does anyone know? Would much appreciate it.

1 More Replies
vpaluch
by New Contributor II
  • 4228 Views
  • 1 reply
  • 0 kudos

External Table from partitioned CSV in Unity Catalog.

When I create an External Table in Unity Catalog from a flattened CSV folder, it works as expected:

CREATE EXTERNAL LOCATION IF NOT EXISTS raw_data URL 'abfss://raw@storage0account0name.dfs.core.windows.net' WITH ( STORAGE CREDENTIAL `a579a...

Data Engineering
Partitioned_CSV
Latest Reply
vpaluch
New Contributor II
  • 0 kudos

Thanks Kaniz, I'm using an External Location authenticated with a Managed Identity, the very same one used for the non-partitioned table and many others that work fine. This account has Storage Blob Contributor rights for all containers and folde...
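
For reference, a minimal sketch of a partitioned external CSV table (table name, columns, and path are hypothetical); with directory-style partitions, the partition metadata may still need to be registered after creation:

spark.sql("""
    CREATE TABLE IF NOT EXISTS main.raw.events (id STRING, amount DOUBLE, year INT, month INT)
    USING CSV
    PARTITIONED BY (year, month)
    OPTIONS (header 'true')
    LOCATION 'abfss://raw@storage0account0name.dfs.core.windows.net/events'
""")

# Discover the existing year=/month= directories under the location
spark.sql("MSCK REPAIR TABLE main.raw.events SYNC PARTITIONS")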

Etyr
by Contributor
  • 5585 Views
  • 3 replies
  • 1 kudos

databricks.sql.exc.RequestError OpenSession error None

I'm trying to access a Databricks SQL Warehouse with Python. I'm able to connect with a token on a Compute Instance on Azure Machine Learning. It's a VM with conda installed; I create an env in Python 3.10. from databricks import sql as dbsql dbsq...

Latest Reply
Etyr
Contributor
  • 1 kudos

The issue was that the new version of databricks-sql-connector (3.0.1) does not handle error messages well, so it gave a generic error and a timeout where it should have given me a 403 and an instant error message without a 900-second timeout. https://gith...
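
For anyone hitting the same generic OpenSession timeout, a minimal sketch of the connection itself (the hostname and HTTP path below are hypothetical; the token needs access to the SQL Warehouse):

from databricks import sql as dbsql

conn = dbsql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abc1234567890def",
    access_token="<token>",
)
with conn.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchall())
conn.close()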

2 More Replies
Rishabh-Pandey
by Esteemed Contributor
  • 3194 Views
  • 3 replies
  • 5 kudos

www.linkedin.com

woahhh #Excel plug-in for #DeltaSharing. Now I can import delta tables directly into my spreadsheet using Delta Sharing. It puts the power of #DeltaLake into the hands of millions of business users. What does this mean? Imagine a data provider delivering...

Latest Reply
udit02
New Contributor II
  • 5 kudos

If you have any uncertainties, feel free to inquire here or connect with me on my LinkedIn profile for further assistance. https://whatsgbpro.org/

2 More Replies
ShankarReddy
by New Contributor II
  • 1033 Views
  • 1 reply
  • 0 kudos

XML Unmarshalling using JAXB from JavaRDD

I have a JavaRDD with complex nested XML content that I want to unmarshal using JAXB to get the data into Java objects. Can anyone please help with how I can achieve this? Thanks

Data Engineering
java
java spark xml jaxb
jaxb
spark
XML
Latest Reply
ShankarReddy
New Contributor II
  • 0 kudos

I hope this should work:

JavaPairRDD<String, PortableDataStream> jrdd = javaSparkContext.binaryFiles("<path_to_file>");
Map<String, PortableDataStream> mp = jrdd.collectAsMap();
OutputStream os = new FileOutputStream(f);
mp.values().forEach(pd -> { try...

JensH
by New Contributor III
  • 9561 Views
  • 3 replies
  • 3 kudos

Resolved! How to pass parameters to a "Job as Task" from code?

Hi, I would like to use the new "Job as Task" feature but I'm having trouble passing values. Scenario: I have a workflow job which contains 2 tasks. Task_A (type "Notebook"): Read data from a table and, based on the contents, decide whether the workflow in ...

Data Engineering
job
parameters
workflow
Latest Reply
Walter_C
Databricks Employee
  • 3 kudos

I found the following information: value is the value for this task value's key. This command must be able to represent the value internally in JSON format. The size of the JSON representation of the value cannot exceed 48 KiB. You can refer to https...
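
A minimal sketch of the taskValues subutility this refers to (the task and key names are hypothetical):

# In Task_A's notebook: publish a value for downstream tasks
dbutils.jobs.taskValues.set(key="mode", value="full_load")

# In a downstream task's notebook: read it back
# (debugValue is returned when the notebook runs outside of a job)
mode = dbutils.jobs.taskValues.get(taskKey="Task_A", key="mode",
                                   default="incremental", debugValue="incremental")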

2 More Replies
Alessandro
by New Contributor
  • 1214 Views
  • 1 reply
  • 0 kudos

Update job parameters, while running, from the API

Hi, when a job is running, I would like to change the parameters with an API call. I know that I can set parameter values from the API when I start a job, or that I can update the default values if the job isn't running, but I didn't find an API c...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

No, there is currently no option to change parameters while the job is running. From the UI you will be able to modify them, but it won't affect the current run; it will be applied to the new job runs you trigger.
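
For completeness, a minimal sketch of updating the job's default parameters for future runs via the Jobs 2.1 API (the host, token, job ID, and parameter name are hypothetical):

import requests

resp = requests.post(
    "https://<workspace-host>/api/2.1/jobs/update",
    headers={"Authorization": "Bearer <token>"},
    json={
        "job_id": 123,
        "new_settings": {
            "parameters": [{"name": "run_mode", "default": "incremental"}],
        },
    },
)
resp.raise_for_status()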

chandraprakash
by New Contributor
  • 1737 Views
  • 2 replies
  • 0 kudos

Find the size of Delta tables for each month before partitioning

We have 38 Delta tables. We decided to partition the Delta tables by month. But we have some small tables as well, so we need to find the size of the Delta tables for each month, so that we can choose either partitioning or Z-ORDER. Is there a way to find t...

Data Engineering
delta
delta_partitions
Latest Reply
dennyglee
Databricks Employee
  • 0 kudos

For your tables, I'm curious if you could utilize Liquid Clustering to reduce some of the maintenance issues relating to choosing Z-Order vs. partitioning. That said, one potential way is to read the Delta transaction log and read the Add Info st...
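
A minimal sketch of that transaction-log approach (the table path is hypothetical), summing add-action file sizes by month; note this counts files as they were added, so rewritten files are counted again:

from pyspark.sql.functions import col, date_format, from_unixtime, sum as sum_

# Each JSON log entry with an 'add' action describes one data file
log = spark.read.json("/path/to/delta_table/_delta_log/*.json")
adds = log.where(col("add").isNotNull()).select("add.path", "add.size", "add.modificationTime")

# modificationTime is epoch milliseconds; bucket the added files by month
(adds
 .withColumn("month", date_format(from_unixtime((col("modificationTime") / 1000).cast("long")), "yyyy-MM"))
 .groupBy("month")
 .agg(sum_("size").alias("bytes_added"))
 .orderBy("month")
 .show())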

1 More Replies
