Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.
Forum Posts

ChristianRRL
by Valued Contributor
  • 458 Views
  • 2 replies
  • 1 kudos

Resolved! DBX Community Pending Answers

Hi there, in the past I've posted questions in this community and I would consistently get responses back in a very reasonable time frame. Typically I think most of my posts have an initial response back within 1-2 days, or just a few days (I don't t...

Latest Reply
ChristianRRL
Valued Contributor
  • 1 kudos

Thank you for clarifying. I know some questions may be a bit more technical, but I hope I get some feedback/suggestions, particularly on my UMF Best Practice question!

1 More Replies
keb
by New Contributor II
  • 464 Views
  • 1 reply
  • 0 kudos

Install bundle built artifact as notebook-scoped library

We are having a hard time finding an intuitive way of using the artifacts we build and deploy with databricks bundle deploy notebook-scoped. Desired result: having internal artifacts be available notebook-scoped for jobs by config, or having an easier way...

Labels: artifacts, asset bundles, DAB
Latest Reply
keb
New Contributor II
  • 0 kudos

We were not able to find a clean solution for this, so what we ended up doing is referencing the common lib like this in every notebook where it is needed:

%pip install ../../../artifacts/.internal/common-0.1-py3-none-any.whl

Dejian
by New Contributor II
  • 275 Views
  • 2 replies
  • 0 kudos

Autoloader delete action on AWS S3

Hi folks, I have been using Auto Loader to ingest files from an S3 bucket. I tried to add a trigger on the workflow to schedule the job to run every 10 minutes. However, recently I'm facing an error that makes the jobs keep failing after a few successful runs...

Latest Reply
saisaran_g
Contributor
  • 0 kudos

There might be a few possibilities; can you check these items? 1. Is there any S3 bucket policy configured, such as time-frame-based file deletion or a file validity window? 2. Check the Auto Loader configuration once again to validate the option of cleanu...

1 More Replies
sivaram_mandepu
by New Contributor
  • 497 Views
  • 1 reply
  • 0 kudos

Unable to pass array of table names from for each and send it as task param

Sending the below array list from the for each task: ["mv_t005u","mv_t005t","mv_t880"]. In the task, I am reading the value as key: mv_name, value: {{input}}, but in the notebook I am getting the below error. Notebook code: %sql REFRESH MATERIALIZED VIEW nonprod_emea.silver_loc...

Latest Reply
Renu_
New Contributor III
  • 0 kudos

Hi @sivaram_mandepu, in the first screenshot, the input must be a valid JSON array, so instead of using {{mvname: "mv_......"}}, update it to [ { "mvname": "mv_......." } ]. In the third screenshot, the SQL error likely comes from a newline or extra sp...
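The fix above can be sketched in plain Python (a hypothetical illustration; the key name "mvname" and the view names are taken from this thread, and the catalog/schema path follows the poster's example): the for-each input must be a JSON array, and each iteration then reads one element by key.

```python
import json

# Sketch of a valid "For each" task input: a JSON array with one
# element per iteration (key name "mvname" follows the reply above).
inputs = json.dumps([
    {"mvname": "mv_t005u"},
    {"mvname": "mv_t005t"},
    {"mvname": "mv_t880"},
])

# Each iteration receives one element (referenced as {{input}} in the
# task). Reading it by key and stripping whitespace guards against the
# newline/extra-space issue mentioned in the reply.
for item in json.loads(inputs):
    view = item["mvname"].strip()
    sql = f"REFRESH MATERIALIZED VIEW nonprod_emea.silver_loc.{view}"
```

The key point is that the top-level value is an array, not a bare object, so the for-each task has something to iterate over.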

benno
by New Contributor II
  • 375 Views
  • 2 replies
  • 0 kudos

No views visible via foreign catalog

Hello, I have created a connection to a SQL Server. I have created a foreign catalog using this connection. When I show the catalog in the catalog explorer I can see the schemas, and I can also see the tables and views in one schema. In another schema,...

Latest Reply
benno
New Contributor II
  • 0 kudos

@dipudot, yes, the permissions are OK. I can see them in SQL Server Management Studio using the same account. I have read somewhere that some characters might not be supported. The views all have the pattern <tenant>$<table_name>. I will retest with a small...

1 More Replies
jorperort
by Contributor
  • 788 Views
  • 3 replies
  • 0 kudos

Resolved! Init Scripts Error When Deploying a Delta Live Table Pipeline with Databricks Asset Bundles

Hello everyone, let me give you some context. I am trying to deploy a Delta Live Tables pipeline using Databricks Asset Bundles, which requires a private library hosted in Azure DevOps. As far as I understand, this can be resolved in three ways: installi...

Latest Reply
jorperort
Contributor
  • 0 kudos

I detected the error; it was due to the path defined in the bundle where the init script was located. I'm closing the post.

2 More Replies
BigAlThePal
by New Contributor
  • 296 Views
  • 1 reply
  • 0 kudos

.py file running stuck on waiting

Hello, hope you are doing well. We are facing an issue when running .py files. This is fairly recent, and we were not experiencing this issue last week. As shown in the screenshots below, the .py file hangs on "waiting" after we press "run all". No matt...

[screenshot: BigAlThePal_0-1743709475328.png]
Latest Reply
humpy_reddy
New Contributor II
  • 0 kudos

Hey @BigAlThePal, it looks like a UI bug, especially in Microsoft Edge. The code actually runs, but the output doesn't show until you refresh. A few quick things you can try: run cells individually instead of using "Run All"; switch to Chrome or Firefox...

diego_poggioli
by Contributor
  • 3131 Views
  • 2 replies
  • 0 kudos

FAILED_READ_FILE.NO_HINT error

We read data from csv in the volume into the table using COPY INTO. The first 200 files were added without problems, but now we are no longer able to add any new data to the table and the error is FAILED_READ_FILE.NO_HINT. The CSV format is always th...

Latest Reply
lurban
New Contributor II
  • 0 kudos

I came across the same issue, and the file causing problems needed the CSV option "multiline" set back to the default "false" to read the file:

df = spark.read.option("multiline", "false").csv("CSV_PATH")

If this approach eliminates the error above, I ...
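To see why the multiline setting changes whether a file parses, here is a plain-Python sketch (no Spark; the sample data is hypothetical). A quoted field containing a newline is one logical record to a multiline-aware parser, but several physical lines to a line-based reader, and that mismatch is the kind of thing that can surface as a read error when the option doesn't match the data.

```python
import csv
import io

# Hypothetical CSV payload: row 1 has an embedded newline inside quotes.
data = 'id,comment\n1,"line one\nline two"\n2,plain\n'

# A quote-aware parser treats the embedded newline as part of one field,
# so it sees 2 data rows.
rows = list(csv.reader(io.StringIO(data)))
logical_rows = len(rows) - 1  # minus the header row

# A naive line-based view of the same bytes sees 4 physical lines,
# i.e. a different record count than the parser above.
physical_lines = data.count("\n")
```

If the file has no embedded newlines, line-based reading (Spark's multiline=false) is both correct and faster, which matches the reply's fix of reverting to the default.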

1 More Replies
Twilight
by New Contributor III
  • 334 Views
  • 2 replies
  • 1 kudos

webterm unminimize command missing?

A lot of commands in webterm basically tell you a bunch of stuff has not been installed or has been minimized, and you should run `unminimize` for a full interactive experience. This used to work great. However, I just tried it and the unminimize command is no...

Latest Reply
Twilight
New Contributor III
  • 1 kudos

1. No such command exists.
2. Probably not; we tend to dump old clusters and create new ones (for new sets of data) fairly frequently and (I think) use the latest stable DBR when creating.
3. I did find a workaround. unminimize has been added to apt so...

1 More Replies
mh177
by New Contributor II
  • 592 Views
  • 2 replies
  • 0 kudos

Resolved! Change Data Feed And Column Masks

Hi there, wondering if anyone can help me. I have had a job set up to stream from one change data feed enabled Delta table to another Delta table, and it has been executing successfully. I then added column masks to the source table from which I am stream...

Latest Reply
saisaran_g
Contributor
  • 0 kudos

Hello mate, hope you're doing great. You can configure a service principal in that case, add the proper roles as per your needs, and use it as the run owner. Re-run the stream so that your PII will not be displayed to other teams/persons unless they are a member. Simple ...

1 More Replies
suryahyd39
by New Contributor
  • 237 Views
  • 1 reply
  • 0 kudos

Can we get the branch name from Notebook

Hi folks, is there a way to display the current git branch name from a Databricks notebook? Thanks

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

Yes, you can display the current git branch name from a Databricks notebook in several ways. Using the Databricks UI: the simplest method is to use the Databricks UI, which already shows the current branch name. In a notebook, look for the button nex...
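Beyond the UI, one generic programmatic sketch (not a Databricks-specific API; it assumes the notebook's files are backed by a real Git checkout and that the git CLI is available on the cluster) is to shell out to git:

```python
import subprocess

def current_git_branch(repo_path="."):
    """Return the checked-out branch name for the repo at repo_path.

    Generic sketch: relies only on the git CLI, so it works anywhere the
    working directory is a real Git checkout. Whether that holds inside a
    Databricks Git folder depends on the runtime, so treat this as a
    fallback to the UI rather than a guaranteed method.
    """
    return subprocess.check_output(
        ["git", "-C", repo_path, "branch", "--show-current"],
        text=True,
    ).strip()
```

`git branch --show-current` prints just the branch name (empty on a detached HEAD), which keeps the output easy to use in notebook logic.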

Gpu
by New Contributor
  • 190 Views
  • 1 reply
  • 0 kudos

How to get the hadoopConfiguration in a Unity Catalog standard access mode app?

Context: a job running on a job cluster configured in Standard access mode (Shared access mode); Scala 2.12.15 / Spark 3.5.0 jar program; Databricks Runtime 15.4 LTS. In this context, it is not possible to get the sparkSession.sparkContext, as confirme...

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

In Unity Catalog standard access mode (formerly shared access mode) with Databricks Runtime 15.4 LTS, direct access to `sparkSession.sparkContext` is restricted as part of the security limitations. However, there are still ways to access the Hadoop c...
