Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

otum
by New Contributor II
  • 4215 Views
  • 6 replies
  • 0 kudos

[Errno 2] No such file or directory

I am reading a JSON file at the location below, using this code: file_path = "/dbfs/mnt/platform-data/temp/ComplexJSON/sample.json" # replace with the file path f = open(file_path, "r") print(f.read()) but it is failing with no such file...

  • 4215 Views
  • 6 replies
  • 0 kudos
Latest Reply
Debayan
Databricks Employee
  • 0 kudos

Hi, as Shan mentioned, could you please `cat` the file and check whether it exists?
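Before `cat`-ing, it can help to check the path programmatically. A minimal sketch (helper names are illustrative, not a Databricks API): `dbfs:/` URIs are only readable by plain Python I/O through the `/dbfs` FUSE mount, so a path check should map the URI first.

```python
import os
from typing import Optional

def resolve_dbfs_path(path: str) -> str:
    """Map a dbfs:/ URI to its /dbfs FUSE-mount equivalent so plain Python
    file I/O can open it (illustrative helper)."""
    if path.startswith("dbfs:/"):
        return "/dbfs/" + path[len("dbfs:/"):]
    return path

def read_if_exists(path: str) -> Optional[str]:
    """Return the file's contents, or None when the path does not exist."""
    local_path = resolve_dbfs_path(path)
    if not os.path.exists(local_path):
        return None
    with open(local_path, "r") as f:
        return f.read()
```

In a notebook you could also run `dbutils.fs.ls("/mnt/platform-data/temp/ComplexJSON/")` to confirm the mount actually contains the file.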

  • 0 kudos
5 More Replies
ac0
by Contributor
  • 7599 Views
  • 3 replies
  • 0 kudos

Resolved! Setting environment variables to use in a SQL Delta Live Table Pipeline

I'm trying to use Global Init Scripts in Databricks to set an environment variable to use in a Delta Live Table pipeline. I want to be able to reference a value passed in as a path rather than hard-coding it. Here is the code for my pipeline: CREATE ST...

  • 7599 Views
  • 3 replies
  • 0 kudos
Latest Reply
ac0
Contributor
  • 0 kudos

I was able to accomplish this by creating a Cluster Policy that put in place the scripts, config settings, and environment variables I needed.
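For reference, a cluster policy can pin environment variables and init scripts with fixed values. A minimal sketch of such a policy definition (the attribute paths follow the cluster-policy definition syntax; the values and script path are placeholders, not from the thread):

```json
{
  "spark_env_vars.BASE_DATA_PATH": {
    "type": "fixed",
    "value": "abfss://data@myaccount.dfs.core.windows.net/landing"
  },
  "init_scripts.0.workspace.destination": {
    "type": "fixed",
    "value": "/Shared/init/set-env.sh"
  }
}
```

Any cluster created under the policy then inherits the variable and the init script without per-job configuration.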

  • 0 kudos
2 More Replies
ChrisS
by New Contributor III
  • 9785 Views
  • 7 replies
  • 8 kudos

How to get data scraped from the web into your data storage

I am learning Databricks for the first time, following a book copyrighted in 2020, so I imagine it might be a little outdated at this point. What I am trying to do is move data from an online source (in this specific case using shell script but ...

  • 9785 Views
  • 7 replies
  • 8 kudos
Latest Reply
CharlesReily
New Contributor III
  • 8 kudos

In Databricks, you can install external libraries by going to the Clusters tab, selecting your cluster, and then adding the Maven coordinates for Deequ. In your notebook or script, y...

  • 8 kudos
6 More Replies
aockenden
by New Contributor III
  • 3593 Views
  • 2 replies
  • 0 kudos

Switching SAS Tokens Mid-Script With Spark Dataframes

Hey all, my team has settled on using directory-scoped SAS tokens to provision access to data in our Azure Gen2 Datalakes. However, we have encountered an issue when switching from a first SAS token (which is used to read a first parquet table in the...

  • 3593 Views
  • 2 replies
  • 0 kudos
Latest Reply
aockenden
New Contributor III
  • 0 kudos

Bump

  • 0 kudos
1 More Replies
pyter
by New Contributor III
  • 8102 Views
  • 5 replies
  • 2 kudos

Resolved! [13.3] Vacuum on table fails if shallow clone without write access exists

Hello everyone, we use Unity Catalog, separating our dev, test and prod data into individual catalogs. We run weekly vacuums on our prod catalog using a service principal that only has (read+write) access to this production catalog, but no access to ou...

  • 8102 Views
  • 5 replies
  • 2 kudos
Latest Reply
Lakshay
Databricks Employee
  • 2 kudos

Are you using Unity Catalog in single user access mode? If yes, could you try using shared access mode?

  • 2 kudos
4 More Replies
pawelzak
by Databricks Partner
  • 3816 Views
  • 2 replies
  • 1 kudos

Dashboard update through API

Hi, I would like to create/update a dashboard definition based on a JSON file. How can one do it? I tried the following: databricks api post /api/2.0/preview/sql/dashboards/$dashboard_id --json @file.json But it does not update the widgets... How can...

  • 3816 Views
  • 2 replies
  • 1 kudos
Latest Reply
Gamlet
New Contributor II
  • 1 kudos

To programmatically create/update dashboards in Databricks using a JSON file, you can use the Databricks REST API's workspace/export and workspace/import endpoints. Generate a JSON representation of your dashboard using workspace/export, modify it as...
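A sketch of calling the endpoint from Python instead of the CLI (workspace host, token, and payload are placeholders; the endpoint path is the one quoted in the question). One possible explanation for the original symptom is that, in this legacy API, widgets are separate objects from the dashboard itself, so posting a dashboard body alone would not touch them.

```python
def dashboard_update_url(host: str, dashboard_id: str) -> str:
    """Build the (preview) SQL dashboard endpoint URL quoted in the question."""
    return f"{host.rstrip('/')}/api/2.0/preview/sql/dashboards/{dashboard_id}"

# Sketch with requests (host/token are placeholders):
# import json
# import requests
# with open("file.json") as f:
#     payload = json.load(f)
# resp = requests.post(
#     dashboard_update_url("https://<workspace-host>", dashboard_id),
#     headers={"Authorization": f"Bearer {token}"},
#     json=payload,
# )
# resp.raise_for_status()
```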

  • 1 kudos
1 More Replies
israelst
by New Contributor III
  • 4761 Views
  • 3 replies
  • 1 kudos

structured streaming schema inference

I want to stream data from Kinesis using DLT. The data is in JSON format. How can I use Structured Streaming to automatically infer the schema? I know Auto Loader has this feature, but it doesn't make sense for me to use Auto Loader since my data is st...

  • 4761 Views
  • 3 replies
  • 1 kudos
Latest Reply
israelst
New Contributor III
  • 1 kudos

I wanted to use Databricks for this. I don't want to depend on AWS Glue. Same way I could do it with AutoLoader...
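One common workaround (a sketch, not confirmed by the thread): since the Kinesis source delivers raw bytes, you can infer a schema from one representative record with `schema_of_json` and then parse the stream with `from_json`. The helper below only sanity-checks what fields a sample record would yield; column and stream names in the comments are assumptions.

```python
import json

def sample_record_schema_fields(sample_json: str) -> list:
    """Top-level field names of one sample record, to sanity-check what
    schema_of_json would infer from it (illustrative helper)."""
    return sorted(json.loads(sample_json).keys())

# In a notebook/DLT pipeline (sketch):
# from pyspark.sql.functions import col, from_json, schema_of_json
# sample = '{"id": 1, "event": "click"}'      # one representative record
# schema = schema_of_json(sample)
# parsed = (raw_stream
#           .select(from_json(col("data").cast("string"), schema).alias("json"))
#           .select("json.*"))
```

The trade-off versus Auto Loader is that the schema is fixed by the sample record rather than evolving with the data.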

  • 1 kudos
2 More Replies
seefoods
by Valued Contributor
  • 2949 Views
  • 3 replies
  • 1 kudos

Resolved! cluster metrics databricks runtime 13.1

Hello everyone, how do I collect the metrics provided by cluster metrics in Databricks Runtime 13.1 using a bash script?

  • 2949 Views
  • 3 replies
  • 1 kudos
Latest Reply
User16539034020
Databricks Employee
  • 1 kudos

Hi @Aubert: Currently, you can only use static downloadable snapshots: https://docs.databricks.com/en/compute/cluster-metrics.html Regards,

  • 1 kudos
2 More Replies
Simha
by New Contributor II
  • 2806 Views
  • 1 reply
  • 1 kudos

How to write only file on to the Blob or ADLS from Databricks?

Hi All, I am trying to write a CSV file to Blob and ADLS from a Databricks notebook using PySpark, and a separate folder is created with the mentioned filename, with a partition file inside the folder. I want only a single file to be written. Can anyone...

  • 2806 Views
  • 1 reply
  • 1 kudos
Latest Reply
Lakshay
Databricks Employee
  • 1 kudos

Hi @Simha , This is expected behavior. Spark always creates an output directory when writing the data and it divides the result into multiple part files. This is because multiple executors write the result into the output directory. We cannot make th...
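If a single file is still required, a common workaround (a sketch, not an official API) is to write with `df.coalesce(1)` into a scratch directory and then move the lone part file out. The helper below uses local filesystem calls; on DBFS paths you would use `dbutils.fs.mv`/`dbutils.fs.rm` instead.

```python
import glob
import os
import shutil

def collapse_spark_output(output_dir: str, target_file: str) -> None:
    """Move the single part file written by df.coalesce(1).write...save(output_dir)
    to target_file, then remove the output directory.
    Assumes exactly one part file exists."""
    parts = glob.glob(os.path.join(output_dir, "part-*"))
    if len(parts) != 1:
        raise ValueError(f"expected exactly one part file, found {len(parts)}")
    shutil.move(parts[0], target_file)
    shutil.rmtree(output_dir)
```

Note that `coalesce(1)` funnels all data through one task, so this only makes sense for small outputs.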

  • 1 kudos
GeorgeD
by New Contributor
  • 1736 Views
  • 0 replies
  • 0 kudos

Uncaught Error: Script error for jupyter-widgets/base

I have been using ipywidgets for quite a while in several notebooks in Databricks, but today things have completely stopped working with the following error: Uncaught Error: Script error for "@jupyter-widgets/base" http://requirejs.org/docs/errors.htm...

  • 1736 Views
  • 0 replies
  • 0 kudos
Mr__D
by New Contributor II
  • 30489 Views
  • 7 replies
  • 1 kudos

Resolved! Writing modular code in Databricks

Hi All, could you please suggest the best way to write PySpark code in Databricks? I don't want to write my code in a Databricks notebook, but rather create Python files (a modular project) in VSCode and call only the primary function in the notebook (the res...

  • 30489 Views
  • 7 replies
  • 1 kudos
Latest Reply
Gamlet
New Contributor II
  • 1 kudos

Certainly! To write PySpark code in Databricks while maintaining a modular project in VSCode, you can organize your PySpark code into Python files in VSCode, with a primary function encapsulating the main logic. Then, upload these files to Databricks...
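A minimal sketch of that layout (paths and module names are assumptions): the real logic lives in an ordinary Python package synced to the workspace, and the notebook only puts the project root on `sys.path` and calls the entry point.

```python
import sys

def add_project_root(path: str) -> None:
    """Make a project package importable from a notebook by prepending its
    root to sys.path (idempotent)."""
    if path not in sys.path:
        sys.path.insert(0, path)

# In the notebook (sketch; /Workspace path and module names are placeholders):
# add_project_root("/Workspace/Repos/me/project")
# from etl.transforms import run_pipeline   # etl/transforms.py holds the logic
# run_pipeline(spark, source="...", target="...")
```

With Repos or workspace files synced from VSCode, only this thin calling cell lives in the notebook, so the package stays testable outside Databricks.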

  • 1 kudos
6 More Replies
Danielsg94
by Databricks Partner
  • 40040 Views
  • 5 replies
  • 1 kudos

Resolved! How can I write a single file to a blob storage using a Python notebook, to a folder with other data?

When I use the following code: df.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").save("/path/mydata.csv") it writes several files, and when used with .mode("overwrite"), it will overwrite everything in th...

  • 40040 Views
  • 5 replies
  • 1 kudos
Latest Reply
Simha
New Contributor II
  • 1 kudos

Hi Daniel, may I know how you fixed this issue? I am facing a similar issue while writing CSV/Parquet to Blob/ADLS: it creates a separate folder with the filename and creates a partition file within that folder. I need to write just a file to the b...

  • 1 kudos
4 More Replies
Cas
by New Contributor III
  • 3693 Views
  • 1 reply
  • 1 kudos

Asset Bundles: Dynamic job cluster insertion in jobs

Hi! As we are migrating from dbx to asset bundles, we are running into some problems with the dynamic insertion of job clusters in the job definition. With dbx we did this nicely with Jinja and defined all the clusters in one place, and a change in th...

Data Engineering
asset bundles
jobs
  • 3693 Views
  • 1 reply
  • 1 kudos
krocodl
by Contributor
  • 5548 Views
  • 2 replies
  • 0 kudos

Resolved! Thread leakage when connection cannot be established

During the execution of the following code we can observe a leaked thread that never terminates: @Test public void pureConnectionErrorTest() throws Exception { try { DriverManager.getConnection(DATABRICKS_JDBC_URL, DATABRICKS_USERNAME, DATABRICKS_PASS...

Data Engineering
JDBC
resource leaking
threading
  • 5548 Views
  • 2 replies
  • 0 kudos
Latest Reply
krocodl
Contributor
  • 0 kudos

This issue is reported as fixed since v2.6.34. I validated version 2.6.36 and it works normally. Many thanks to the developers for the work done!

  • 0 kudos
1 More Replies
rt-slowth
by Contributor
  • 1688 Views
  • 1 reply
  • 0 kudos

Delta Live Table streaming pipeline

How do I do a simple left join of a static table and a streaming table, both registered under a catalog, in a Delta Live Tables streaming pipeline?

  • 1688 Views
  • 1 reply
  • 0 kudos
Latest Reply
Priyanka_Biswas
Databricks Employee
  • 0 kudos

Hi @rt-slowth, I would like to share with you the Databricks documentation, which contains details about stream-static table joins: https://docs.databricks.com/en/delta-live-tables/transform.html#stream-static-joins Stream-static joins are a good choic...
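A minimal sketch of the join itself, under the assumption (per the linked docs) that the streaming table is the left side of a stream-static join; the surrounding DLT scaffolding is shown in comments because `dlt` only exists inside a pipeline runtime, and all table names are placeholders.

```python
def stream_static_join(events_df, dims_df, key: str = "dim_id"):
    """Left-join a streaming DataFrame (left side) to a static one."""
    return events_df.join(dims_df, on=key, how="left")

# In a DLT pipeline notebook (sketch):
# import dlt
# @dlt.table
# def enriched_events():
#     events = dlt.read_stream("raw_events")              # streaming table
#     dims = spark.read.table("main.default.dim_table")   # static table
#     return stream_static_join(events, dims)
```

Each micro-batch of the stream is joined against the static table's state as of that batch.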

  • 0 kudos