Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

dhanshri
by New Contributor
  • 1755 Views
  • 1 reply
  • 0 kudos

Tracking File Arrivals in Nested Folders Using Databricks File Arrival Trigger

Hi Team, I'm currently exploring a file arrival trigger with Databricks, but my data is organized into nested folders representing various sources. For instance:

source1
|-- file1
|   |-- file.csv
|-- file2
|   |-- file.csv

My goal is to dete...
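(Sketch, not from the thread: one way to handle nested folders is to pair the file arrival trigger with an Auto Loader stream that globs over the subfolders, so arrivals anywhere under the source are picked up by one stream. Paths, schema/checkpoint locations, and the target table below are hypothetical.)

    # Minimal Auto Loader sketch over nested source folders (paths hypothetical)
    df = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "csv")
          .option("cloudFiles.schemaLocation", "/tmp/schemas/landing")  # hypothetical
          .load("/mnt/landing/source1/*/"))  # glob matches file1/, file2/, ...

    (df.writeStream
       .option("checkpointLocation", "/tmp/checkpoints/landing")  # hypothetical
       .trigger(availableNow=True)  # process whatever has arrived, then stop
       .toTable("bronze.landing_files"))  # hypothetical target table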

Community Platform Discussions
Azure Databricks
Databricks
Latest Reply
adriennn
Contributor II
  • 0 kudos

@Retired_mod did an LLM bot write the above response for you? You link to a Stack Overflow post which uses Azure Data Factory, and your text contains concepts which do not apply to Databricks ("Use a lookup activity or a Get Metadata Activity to fetch t...

dataVaughan
by New Contributor II
  • 3134 Views
  • 3 replies
  • 0 kudos

Notebook Dashboard to html to pdf issues

I have created a dashboard using the notebook dashboard interface, rather than the SQL warehouse dashboards. This means that the tables and visualizations on the dashboard, as well as the dashboard itself, are directly tied to a notebook and the outp...

Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@dataVaughan - you can use a Lakeview dashboard, which provides a URL that is shareable outside of the Databricks workspace. https://www.databricks.com/blog/announcing-public-preview-lakeview-dashboards In your current scenario, you can clone ...

2 More Replies
sai_sathya
by New Contributor III
  • 1691 Views
  • 1 reply
  • 0 kudos

Resolved! Fetching metadata for tables in a database stored in Unity Catalog

Hi everyone, I am trying to fetch the metadata of every column from a table, and of every table in a database under a catalog. For that I am trying to use the samples catalog provided by Databricks and get details for the tpch database that provi...

(screenshot attached: sai_sathya_0-1713277488517.png)
Latest Reply
shan_chandra
Databricks Employee
  • 0 kudos

@sai_sathya - you can use the DESCRIBE EXTENDED command to get the metadata of a given table. You can also query information_schema.columns within your UC catalog to check the column details of a given table.
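(For reference, a minimal sketch of both approaches against the samples catalog mentioned in the question; the orders table name is assumed from the tpch sample.)

    # Table-level metadata for one table
    spark.sql("DESCRIBE EXTENDED samples.tpch.orders").show(truncate=False)

    # Column-level metadata for every table in the tpch schema
    spark.sql("""
        SELECT table_name, column_name, data_type
        FROM samples.information_schema.columns
        WHERE table_schema = 'tpch'
    """).show()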

UdayPatel
by New Contributor III
  • 4043 Views
  • 5 replies
  • 1 kudos

Resolved! Can't run .py file using workflows anymore

Dear all, greetings! I have been trying to run a workflow job which runs successfully when a task is created using a notebook file from a folder present in the workspace, but when the same task's type is changed to Python script and a .py file is select...

Latest Reply
UdayPatel
New Contributor III
  • 1 kudos

Hi, I have found the solution. It was due to the following option being enabled under the Feature Enablement tab in the Databricks account console --> Settings. Thank you for all your help! Regards, Uday

4 More Replies
Mustafa_Kamal
by New Contributor II
  • 1475 Views
  • 4 replies
  • 0 kudos

Parameterizing DLT Pipelines

Hi everyone, I have a DLT pipeline which I need to execute for different source systems. I need advice on how to parameterize this. I have gone through many articles on the web, but it seems there is no accurate information available. Can anyone please hel...

Latest Reply
Mustafa_Kamal
New Contributor II
  • 0 kudos

Thank you @AmanSehgal, I have done that and was able to execute the pipeline successfully. But I need to change the parameter value at run time, so that the same pipeline can be used for multiple sources. Can we pass parameters from Job to DLT Pipeli...
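(A common pattern, sketched here rather than confirmed as the accepted answer: put a key in the pipeline's configuration settings, read it with spark.conf.get inside the pipeline code, and change the value per run. The configuration key, path, and table name below are hypothetical.)

    import dlt

    # Hypothetical key set in the DLT pipeline settings, e.g.
    # "configuration": {"mypipeline.source_system": "source1"}
    source = spark.conf.get("mypipeline.source_system", "source1")

    @dlt.table(name=f"bronze_{source}_orders")
    def bronze_orders():
        # Hypothetical landing path parameterized by source system
        return spark.read.format("csv").load(f"/mnt/landing/{source}/orders/")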

3 More Replies
databrciks
by New Contributor II
  • 1695 Views
  • 2 replies
  • 0 kudos

Resolved! Databricks: failure logs

Hello team, I am new to Databricks. Where are all the logs generally stored in Databricks? I see that if any job fails, I can see some error messages below the command. Otherwise, how do I check the log files/error messages in real time in the Databricks UI?T...

Latest Reply
databrciks
New Contributor II
  • 0 kudos

Thanks for the response. This helped.

1 More Replies
NhanNguyen
by Contributor III
  • 2043 Views
  • 2 replies
  • 1 kudos

Resolved! Cannot create delta location with mount path

Hi all, I'm trying to create a table but cannot use a predefined mount path like '/mnt/silver/'. However, if I use the full path of the Azure blob container, it creates successfully, like this: `CREATE TABLE IF NOT EXISTS nhan_databricks.f1_processed.circuits (...

Latest Reply
NhanNguyen
Contributor III
  • 1 kudos

Oh, thanks for your answer. Actually, I'm using Unity Catalog.
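(Context for later readers, as a hedged note: Unity Catalog tables generally need a cloud URI governed by an external location rather than a /mnt DBFS mount path. A sketch; the column list, storage account, and container names are hypothetical.)

    # Sketch: external UC table on a cloud URI instead of a /mnt path.
    # Assumes a UC external location already covers this abfss path.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS nhan_databricks.f1_processed.circuits (
            circuit_id INT,
            name STRING
        )
        USING DELTA
        LOCATION 'abfss://silver@mystorageacct.dfs.core.windows.net/f1/circuits'
    """)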

1 More Replies
Hetnon
by New Contributor II
  • 1122 Views
  • 1 reply
  • 0 kudos

Can't Create a Workspace using Google Cloud

Trying to create my first workspace. I hit 'create workspace' and I see 3 buckets being created in my GCP project, but nothing shows up under 'Workspaces' in my Databricks console; the only thing there is the 'Create workspace' button. Also, there is no erro...

anonymous_567
by New Contributor II
  • 938 Views
  • 1 reply
  • 0 kudos

Ingesting Non-Incremental Data into Delta

Hello, I have non-incremental data landing in a storage account. This data contains old data from before as well as new data. I would like to avoid doing a complete table deletion and table creation just to upload the data from storage and have an upd...

Latest Reply
AmanSehgal
Honored Contributor III
  • 0 kudos

Well, if you know the conditions that separate new data from old data, then while reading the data into your dataframe, use a filter or where clause to select the new data and ingest it into your Delta table. This is how you can do it in general. But if you ha...
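(A minimal sketch of that approach; the column name, paths, and table are hypothetical.)

    from pyspark.sql import functions as F

    # Read the full landing dataset (old + new records); path hypothetical
    df = (spark.read.format("csv")
          .option("header", "true")
          .load("/mnt/landing/full_dump/"))

    # High-water mark from the existing Delta table (assumes it is non-empty)
    last_loaded = spark.table("main.bronze.events").agg(F.max("event_date")).first()[0]

    # Keep only records newer than the high-water mark and append them
    new_rows = df.filter(F.col("event_date") > F.lit(last_loaded))
    new_rows.write.format("delta").mode("append").saveAsTable("main.bronze.events")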

Olfa_Kamli
by New Contributor II
  • 935 Views
  • 1 reply
  • 0 kudos

Log delivery is not creating data in S3 bucket

Hi, does anyone have an idea about the typical duration for Databricks to create logs in an S3 bucket using the databricks_mws_log_delivery Terraform resource? I've implemented the code provided in the Databricks official documentation, but I've be...

Latest Reply
Olfa_Kamli
New Contributor II
  • 0 kudos

The issue has been resolved. There was no problem with the code or the API. However, it took over 12 hours for logs to start appearing in my bucket, despite the Databricks documentation indicating that logs should appear within 1 hour. Thank you!

TheIceBrick
by New Contributor III
  • 5488 Views
  • 3 replies
  • 1 kudos

Is there a (request) size limit for the Databricks REST API SQL statements?

When inserting rows through the SQL API (/api/2.0/sql/statements/), if more than a certain number of records (about 25 records with 8 small columns) are included in the statement, the call fails with the error: "The request could not be processed by...

Community Platform Discussions
REST API
Sql Statements
Latest Reply
ChrisCkx
New Contributor II
  • 1 kudos

@TheIceBrick did you find out anything else about this? I am experiencing exactly the same: I can insert up to 35 rows but it breaks at about 50 rows. The payload size is 42 KB, and I am passing parameters for each row. @Debayan This is nowhere near the 16 MiB /...
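(One workaround, sketched under the assumption that the limit is on request size: split the insert into several smaller statement executions. Host, token, warehouse ID, and table names are placeholders; the endpoint is the Statement Execution API discussed in the thread.)

    import requests

    HOST = "https://<workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                  # placeholder
    rows = [(i, f"name_{i}") for i in range(500)]      # example data
    BATCH = 25  # stay under the observed request-size ceiling

    for start in range(0, len(rows), BATCH):
        values = ", ".join(f"({i}, '{name}')" for i, name in rows[start:start + BATCH])
        resp = requests.post(
            f"{HOST}/api/2.0/sql/statements/",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={
                "warehouse_id": "<warehouse-id>",  # placeholder
                "statement": f"INSERT INTO my_catalog.my_schema.my_table VALUES {values}",
            },
        )
        resp.raise_for_status()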

2 More Replies
jenshumrich
by Contributor
  • 2031 Views
  • 2 replies
  • 0 kudos

Long running jobs get lost

Hello, I tried to schedule a long-running job, and surprisingly it seems to neither terminate (and thus does not let the cluster shut down) nor continue running, even though the state is still "Running". But the truth is that the job has miserably ...

(screenshots attached: jenshumrich_0-1712742957610.png, jenshumrich_2-1712743008070.png, jenshumrich_3-1712743098546.png)
Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Have you looked at the SQL plan to see what Spark job 72 was doing?

1 More Replies
chari
by Contributor
  • 1736 Views
  • 2 replies
  • 0 kudos

Reading CSV file with Spark throws [insufficient privilege] error

Hello community, I have some CSV files saved in the Databricks workspace and want to read them with Spark. I use the command df = spark.read.format('csv').load(r'filepath'). However, it throws the error org.apache.spark.SparkSecurityException: [INSU...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

If this is a UC-enabled workspace, you need to grant the right access.
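(As a sketch of what "the right access" could look like in UC; the principal and object names are hypothetical, and the exact grants depend on where the files live.)

    # Hypothetical UC grants: SELECT on a schema plus the USE privileges
    # required on the parent catalog and schema.
    spark.sql("GRANT USE CATALOG ON CATALOG my_catalog TO `user@example.com`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `user@example.com`")
    spark.sql("GRANT SELECT ON SCHEMA my_catalog.my_schema TO `user@example.com`")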

1 More Replies
Ajay-Pandey
by Esteemed Contributor III
  • 2811 Views
  • 3 replies
  • 2 kudos

Resolved! Update regarding Community Reward Store

Hi Team, is there any update on the Community Reward Store? It has been discontinued from the old portal, and we still can't see the new portal for it. Is there an expected date when this will be available for community members?

Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 2 kudos

Thanks for the update.

2 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group