Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

shri0509
by New Contributor II
  • 1337 Views
  • 5 replies
  • 1 kudos

How to avoid iteration/loop in databricks in the given scenario

Hi all, I need your input. I am new to Databricks and working with a dataset that consists of around 10,000 systems, each containing approximately 100 to 150 parts. These parts have attributes such as name, version, and serial number. The dataset size...
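
A minimal sketch of how such a dataset can be processed without a Python loop, assuming it can be loaded as one DataFrame with an array of part structs; the table and column names here are hypothetical:

from pyspark.sql import functions as F

# Hypothetical layout: one row per system, with an array column of part structs.
systems = spark.read.table("systems")

# explode() turns each part into its own row, so Spark parallelizes the work
# across all ~10,000 systems instead of iterating over them in Python.
parts = (
    systems
    .select("system_id", F.explode("parts").alias("part"))
    .select("system_id", "part.name", "part.version", "part.serial_number")
)

parts_per_system = parts.groupBy("system_id").count()
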

Latest Reply
AnnieWhite
New Contributor II
  • 1 kudos

Thank you so much for the link.

4 More Replies
Tico23
by Contributor
  • 14590 Views
  • 12 replies
  • 10 kudos

Connecting SQL Server (on-premises) to Databricks via jdbc:sqlserver

Is it possible to connect to an on-premises SQL Server (not Azure) from Databricks? I tried to ping my VirtualBox VM (with Windows Server 2022) from within Databricks and the request timed out.
%sh ping 122.138.0.14
This is what my connection might look l...

Latest Reply
BharathKumarS
New Contributor II
  • 10 kudos

I tried to connect to a localhost SQL Server from Databricks Community Edition, but it failed. I created an IP rule allowing inbound connections on port 1433 from all public networks, but it still didn't connect. I tried locally using Python and it work...
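
For reference, a hedged sketch of the JDBC read itself; the database, table, user, and secret names are hypothetical, and it assumes the on-premises server is actually reachable from the Databricks network (note that %sh ping can fail even when TCP connectivity works, since ICMP is often blocked):

jdbc_url = (
    "jdbc:sqlserver://122.138.0.14:1433;"
    "databaseName=mydb;encrypt=true;trustServerCertificate=true"
)

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.my_table")
    .option("user", "my_user")
    # Keep credentials in a secret scope rather than in the notebook.
    .option("password", dbutils.secrets.get("my_scope", "sqlserver_password"))
    .load()
)
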

11 More Replies
guangyi
by Contributor III
  • 977 Views
  • 4 replies
  • 2 kudos

Resolved! How to create a DLT pipeline with SQL statement

I need a DLT pipeline to create a materialized view for fetching event logs. All the approaches I tried below failed: attaching a notebook with pure SQL inside (no magic cells like `%sql`) failed; attaching a notebook with `spark.sql` Python code failed beca...

Latest Reply
guangyi
Contributor III
  • 2 kudos

After just finishing my last reply, I realized what's wrong with my code: I should use the "file" property instead of "notebook" in the libraries section. It works now. Thank you guys, you are my rubber duck!
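
A sketch of what that fix can look like through the Python SDK; the pipeline name and file path are hypothetical, and the exact required fields (target catalog, schema, and so on) vary by workspace and SDK version:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service.pipelines import FileLibrary, PipelineLibrary

w = WorkspaceClient()

# Point the pipeline at a plain .sql workspace file (the "file" property)
# rather than a notebook; the file can hold ordinary DLT SQL such as
# CREATE MATERIALIZED VIEW ... AS SELECT ... FROM event_log(...).
w.pipelines.create(
    name="event-log-mv",
    libraries=[PipelineLibrary(file=FileLibrary(path="/Workspace/dlt/event_log.sql"))],
)
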

3 More Replies
MarkV
by New Contributor II
  • 632 Views
  • 1 reply
  • 0 kudos

Getting PermissionDenied in SDK When Updating External Location Isolation Mode

Using the Databricks SDK for Python in a notebook in a Databricks workspace, I'm creating an external location and then attempting to update the isolation mode and workspace bindings associated with the external location. The step to create the exter...

Latest Reply
MarkV
New Contributor II
  • 0 kudos

Let me clean up these cells for better readability:

%pip install databricks-sdk --upgrade
dbutils.library.restartPython()

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import catalog

w = WorkspaceClient()

# This works without issu...
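
The isolation-mode update that follows can look roughly like the sketch below (the location name is hypothetical; enum names are as in recent databricks-sdk releases). PermissionDenied at this step usually means the caller is not the external location's owner or a metastore admin:

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import catalog

w = WorkspaceClient()

# Requires ownership of the external location (or metastore admin rights).
w.external_locations.update(
    name="my_external_location",
    isolation_mode=catalog.IsolationMode.ISOLATION_MODE_ISOLATED,
)
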

Agus1
by New Contributor III
  • 3401 Views
  • 2 replies
  • 0 kudos

Obtain the source table version number from checkpoint file when using Structured Streaming

Hello! I'm using Structured Streaming to write to a Delta table. The source is another Delta table, also written with Structured Streaming. To data-check the results, I'm attempting to obtain from the checkpoint files of the target table the ...

Latest Reply
Agus1
New Contributor III
  • 0 kudos

Hello @Retired_mod, thank you for your answer. I'm a bit confused here, because you seem to be describing the opposite of the behavior I've seen in our checkpoint files. Here I repost my examples to try to understand better. First checkpoint file:
{"sour...
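
For anyone comparing the same files, a small sketch of pulling the per-source JSON out of a streaming checkpoint's offsets file (the path is hypothetical). Note that the recorded reservoirVersion is generally the Delta version the next micro-batch will start from, which can make it look one ahead of the data actually processed:

import json

# Each offsets file contains version header lines followed by one JSON
# document per streaming source.
raw = dbutils.fs.head("/checkpoints/target_table/offsets/42", 65536)

for line in raw.splitlines():
    line = line.strip()
    if line.startswith("{") and "reservoirVersion" in line:
        print(json.loads(line)["reservoirVersion"])
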

1 More Replies
ae20cg
by New Contributor III
  • 3917 Views
  • 5 replies
  • 9 kudos

Databricks cluster web terminal: different permissions with tmux and xterm

I am launching the web terminal on my Databricks cluster, and when I am using the ephemeral xterm instance I can easily navigate to the desired directory in `Workspace` and run anything, for example `ls ./`. When I switch to tmux so that I can preserv...

Latest Reply
alenka
New Contributor III
  • 9 kudos

Hey there, fellow data explorer pals! I totally get your excitement when launching that web terminal on your Databricks cluster and feeling the power of running commands like 'ls ./' in the ephemeral xterm instance. It's like traversing the vast univ...

4 More Replies
kranthi2
by New Contributor III
  • 746 Views
  • 2 replies
  • 2 kudos

Resolved! alter DLT Materialized View alter column set MASK

I am trying to mask a column on a DLT materialized view (created using DLT syntax), but I am not able to set the column mask after creation. I'd appreciate any workaround.
alter DLT Materialized View alter column set MASK
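
Until ALTER ... SET MASK is supported on DLT materialized views, one hedged workaround is to bake the masking into the view's definition itself; the table, column, and group names below are hypothetical:

import dlt
from pyspark.sql import functions as F

@dlt.table(name="orders_masked")
def orders_masked():
    # is_account_group_member() lets the MV itself decide who sees raw values.
    masked = F.when(
        F.expr("is_account_group_member('pii_readers')"), F.col("ssn")
    ).otherwise(F.lit("***-**-****"))
    return spark.read.table("main.silver.orders").withColumn("ssn", masked)
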

Latest Reply
kranthi2
New Contributor III
  • 2 kudos

Thank you. I will submit the idea.

1 More Replies
prasadvaze
by Valued Contributor II
  • 21793 Views
  • 15 replies
  • 12 kudos

Resolved! How to query delta lake using SQL desktop tools like SSMS or DBVisualizer

Is there a way to use SQL desktop tools? Neither Delta OSS nor Databricks provides a desktop client (similar to Azure Data Studio) to browse and query Delta Lake objects. I currently use Databricks SQL, a web UI in the Databricks workspace, but se...

Latest Reply
prasadvaze
Valued Contributor II
  • 12 kudos

DSR is the Delta Standalone Reader; see more here: https://docs.delta.io/latest/delta-standalone.html. It's a crate (and now also a Python library) that allows you to connect to Delta tables without using Spark (e.g., directly from Python and not using pyspa...
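
A minimal sketch of the Python route (pip install deltalake, the delta-rs bindings); the path is hypothetical, and cloud paths additionally need storage_options credentials:

from deltalake import DeltaTable

# Reads the Delta transaction log directly, without any Spark cluster.
dt = DeltaTable("/data/tables/events")
print(dt.version())
df = dt.to_pandas()  # or dt.to_pyarrow_table() for larger data
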

14 More Replies
oleh_v
by New Contributor
  • 509 Views
  • 2 replies
  • 0 kudos

Upload of .bin file >400 MB

I am trying to upload a file with a .bin extension to a local workspace folder; it is required to have it locally. I tried loading it from DBFS, but loading files over 265 MB is not allowed with the cluster. I tried to upload manually but failed with the same error "OSError: [Errno5]...

Latest Reply
Kartheek_Katta
New Contributor II
  • 0 kudos

Hello Slash, thank you for your response. I'm encountering the same issue as described. I tried running the provided code in my Databricks workspace, but I received an error. My question is how the script is expected to access local files, especially ...
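
One hedged alternative for binaries this large is to upload them to a Unity Catalog volume instead of the workspace tree, for example via the SDK Files API run from your local machine; the volume path and file name are hypothetical:

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Streams the local file to a UC volume, sidestepping workspace file-size limits.
with open("model.bin", "rb") as f:
    w.files.upload("/Volumes/main/default/binaries/model.bin", f, overwrite=True)
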

1 More Replies
Meghana_Vasavad
by New Contributor III
  • 970 Views
  • 4 replies
  • 0 kudos

Resolved! Discrepancies in Data Engineering GitHub Repositories and Errors in Databricks Notebooks

Hi team, I am writing to express my concerns regarding the recent Databricks webinar on "Data Intelligence with Databricks". During the session, two GitHub repositories associated with the notebooks being used were mentioned, as listed below.
1...

Latest Reply
Sujitha
Databricks Employee
  • 0 kudos

Hi @Meghana_Vasavad, could you please file a ticket with Databricks Support? They will help you with this request.

3 More Replies
raghu2
by New Contributor III
  • 2957 Views
  • 5 replies
  • 0 kudos

Resolved! Error deploying a DAB

I followed the steps listed in this article. After creating and validating the bundle with the default template, during deployment using this command:

databricks bundle deploy -t dev --profile zz

I get this message:

Building mySecPrj...
Error: build failed mySecPr...

Latest Reply
ADB0513
New Contributor III
  • 0 kudos

@daniel_sahal I am receiving the same error; I tried your solution and am still getting "invalid command 'bdist_wheel'". Any other suggestions? Thanks
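
"invalid command 'bdist_wheel'" is typically setuptools reporting that the wheel package is missing from the Python environment that builds the bundle artifact. A quick hedged check/fix, run from that same environment:

import subprocess
import sys

# Install/upgrade the build tooling in the interpreter the bundle build uses.
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--upgrade", "setuptools", "wheel"]
)
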

4 More Replies
brickster_2018
by Databricks Employee
  • 13945 Views
  • 3 replies
  • 0 kudos
Latest Reply
Hugh_Ku
New Contributor II
  • 0 kudos

I've also run into the same issue: a customised Docker image does not provide DATABRICKS_RUNTIME_VERSION as an environment variable. I believe there are still many issues in how customised Docker images are used in Databricks clusters. Can anyone from Databricks help answer this?
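
Until that is resolved, a defensive lookup avoids hard failures on custom containers; a minimal sketch:

import os

# On custom Docker images this variable may be absent, so don't index
# os.environ directly.
runtime = os.environ.get("DATABRICKS_RUNTIME_VERSION")
if runtime is None:
    print("DATABRICKS_RUNTIME_VERSION not set (likely a custom container).")
else:
    print(f"Running on DBR {runtime}")
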

2 More Replies
varshini_reddy
by New Contributor III
  • 1020 Views
  • 6 replies
  • 0 kudos

Databricks UC enabled but Lineage not found for one table

Databricks UC is enabled, but lineage is not found for one table, whereas I can see the lineage for the other two. Any idea why? I'm performing a few transformations on the bronze data, taking good_data_transformed as a DataFrame, creating a temp view fo...

Latest Reply
filipniziol
Contributor III
  • 0 kudos

It is because of the temp view. To debug further you would need to write out all the source tables, transformations, target tables, actual lineage, and expected lineage, but as a rule of thumb, the lineage is lost when using a temp view. Lineage is captu...
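
A sketch of the workaround this implies: materialize the intermediate step as a governed table instead of a temp view, so UC can capture the edge; the table names are hypothetical and the transformations are elided:

# Instead of:
#   transformed.createOrReplaceTempView("good_data_transformed")
# write the intermediate result as a UC table, which keeps lineage intact.
transformed = (
    spark.read.table("main.bronze.events")   # hypothetical source
    .filter("quality = 'good'")              # transformations elided
)
transformed.write.mode("overwrite").saveAsTable(
    "main.silver.good_data_transformed"
)
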

5 More Replies
karolinalbinsso
by New Contributor II
  • 3089 Views
  • 2 replies
  • 3 kudos

Resolved! How to access the job scheduling date from within the notebook?

I have created a job that contains a notebook that reads a file from Azure Storage. The file name contains the date when the file was transferred to storage. A new file arrives every Monday, and the read job is scheduled to run every Monday. I...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 3 kudos

Hi, I guess the files are in the same directory structure, so you can use the cloud files Auto Loader. It will incrementally read only new files: https://docs.microsoft.com/en-us/azure/databricks/spark/latest/structured-streaming/auto-loader
So it will ...
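
A minimal Auto Loader sketch along those lines, with the file date parsed out of the path; the locations and the file-name pattern are hypothetical:

from pyspark.sql import functions as F

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/checkpoints/weekly/_schema")
    .load("abfss://container@account.dfs.core.windows.net/incoming/")
    # Pull the embedded date (e.g. data_2024-07-01.csv) out of the file path.
    .withColumn(
        "file_date",
        F.regexp_extract(F.col("_metadata.file_path"), r"(\d{4}-\d{2}-\d{2})", 1),
    )
)
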

1 More Replies
csmcpherson
by New Contributor III
  • 607 Views
  • 1 reply
  • 1 kudos

Resolved! Workflow file watch - capture filename trigger

With respect to the file watch trigger in workflows, how can we capture which files and/or path raised the trigger? I'd like to use this information to set parameters based on the file name and file path. Thank you! https://...

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @csmcpherson, this is currently not supported, but the Databricks team is working on that idea according to the thread below: Solved: File information is not passed to trigger job on f... - Databricks Community - 39266. As a workaround, if you use autoloader...
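
For reference, a sketch of that Auto Loader workaround: the job's file-arrival trigger starts the run, and the stream itself recovers the file names from the _metadata column (paths hypothetical):

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/checkpoints/filewatch/_schema")
    .load("/Volumes/main/default/landing/")
    # Every row carries the path of the file that produced it.
    .selectExpr("*", "_metadata.file_path AS source_file")
)
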


Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.
