Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

AnkurMulasi
by New Contributor
  • 1100 Views
  • 0 replies
  • 0 kudos

Time Series Book by a Senior Solutions Architect

I recently worked with one of the Senior Solutions Architects at Databricks, Yoni Ramaswami (https://www.linkedin.com/in/yoni-r/), on a new book, "Time Series Analysis with Spark". Key features: Quickly get started with your first models and explore the...

sivaram_mandepu
by New Contributor
  • 1615 Views
  • 1 reply
  • 0 kudos

Unable to pass array of table names from For Each and send it as task param

Sending the below array list from the For Each task: ["mv_t005u","mv_t005t","mv_t880"]. In the task, I am reading the value as key: mv_name, value: {{input}}, but in the notebook I am getting the below error. Notebook code: %sql REFRESH MATERIALIZED VIEW nonprod_emea.silver_loc...

Latest Reply
Renu_
Valued Contributor II
  • 0 kudos

Hi @sivaram_mandepu, in the first screenshot the input must be a valid JSON array, so instead of using {{mvname: "mv_......"}}, update it to [ { "mvname": "mv_......." } ]. In the third screenshot, the SQL error likely comes from a newline or extra sp...
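The reply's point can be sketched outside Databricks with plain JSON. This is a minimal illustration, assuming the For Each task expects a JSON array of objects; the table names come from the original post and the key name "mvname" follows the reply:

```python
import json

# Not valid JSON: unquoted key and doubled braces, as in the first screenshot.
invalid = '{{mvname: "mv_t005u"}}'

# Valid input: a JSON array where each element is one object per iteration.
valid = json.dumps([{"mvname": name} for name in ["mv_t005u", "mv_t005t", "mv_t880"]])
print(valid)

# Each For Each iteration then receives one object:
first = json.loads(valid)[0]
print(first["mvname"])
```

Each `{{input}}` reference in the task would then resolve to one element of this array.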

benno
by New Contributor II
  • 1347 Views
  • 2 replies
  • 0 kudos

No views visible via foreign catalog

Hello, I have created a connection to a SQL Server. I have created a foreign catalog using this connection. When I show the catalog in the Catalog Explorer I can see the schemas, and I can also see the tables and views in one schema. In another schema,...

Latest Reply
benno
New Contributor II
  • 0 kudos

@dipudot, yes the permissions are OK. I can see them in SQL Server Management using the same account. I have read somewhere that some characters might not be supported. The views all have the pattern <tenant>$<table_name>. I will retest with a small...

1 More Replies
jorperort
by Contributor
  • 2786 Views
  • 3 replies
  • 0 kudos

Resolved! Init Scripts Error When Deploying a Delta Live Table Pipeline with Databricks Asset Bundles

Hello everyone, let me give you some context. I am trying to deploy a Delta Live Table pipeline using Databricks Asset Bundles, which requires a private library hosted in Azure DevOps. As far as I understand, this can be resolved in three ways: Installi...

Latest Reply
jorperort
Contributor
  • 0 kudos

I detected the error; it was due to the path defined in the bundle where the init script was located. I'm closing the post.

2 More Replies
diego_poggioli
by Contributor
  • 8744 Views
  • 2 replies
  • 0 kudos

FAILED_READ_FILE.NO_HINT error

We read data from CSV files in the volume into the table using COPY INTO. The first 200 files were added without problems, but now we are no longer able to add any new data to the table and the error is FAILED_READ_FILE.NO_HINT. The CSV format is always th...

Latest Reply
lurban
Databricks Partner
  • 0 kudos

I came across the same issue; the file causing the problem needed the CSV option "multiline" set back to the default "false" to read the file: df = spark.read.option("multiline", "false").csv("CSV_PATH"). If this approach eliminates the error above, I ...

1 More Replies
Twilight
by Contributor
  • 1255 Views
  • 2 replies
  • 1 kudos

webterm unminimize command missing?

A lot of commands in webterm basically tell you that a bunch of stuff has not been installed or has been minimized and that you should run `unminimize` for a full interactive experience. This used to work great. However, I just tried it and the unminimize command is no...

Latest Reply
Twilight
Contributor
  • 1 kudos

1. No such command exists.
2. Probably not - we tend to dump old clusters and create new ones (for new sets of data) fairly frequently and (I think) use the latest stable DBR when creating.
3. I did find a workaround: unminimize has been added to apt so...

1 More Replies
mh177
by New Contributor II
  • 1799 Views
  • 2 replies
  • 0 kudos

Resolved! Change Data Feed And Column Masks

Hi there, wondering if anyone can help me. I have a job set up to stream from one change-data-feed-enabled Delta table to another Delta table, and it has been executing successfully. I then added column masks to the source table from which I am stream...

Latest Reply
saisaran_g
Contributor
  • 0 kudos

Hello mate, hope you're doing great. You can configure a service principal in that case, add proper roles as per your needs, and use it as the run owner. Re-run the stream so that your PII will not be displayed to other teams/persons until having the member. Simple ...

1 More Replies
eimis_pacheco
by Contributor
  • 2224 Views
  • 2 replies
  • 1 kudos

Resolved! Databricks AI + Data Summit discount coupon

Hi Community, I hope you're doing well. I wanted to ask you the following: I want to go to the Databricks AI + Data Summit this year, but it's super expensive for me. And hotels in San Francisco, as you know, are super expensive. So, I wanted to know how I ...

Latest Reply
eimis_pacheco
Contributor
  • 1 kudos

Thank you for your answer. Thanks

1 More Replies
suryahyd39
by New Contributor
  • 1669 Views
  • 1 reply
  • 0 kudos

Can we get the branch name from Notebook

Hi folks, is there a way to display the current Git branch name from a Databricks notebook? Thanks

Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

Yes, you can display the current Git branch name from a Databricks notebook in several ways. Using the Databricks UI: the simplest method is the Databricks UI itself, which already shows the current branch name - in a notebook, look for the button nex...
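Besides the UI, the branch can be read programmatically. A minimal sketch, assuming the Databricks Repos REST API (`GET /api/2.0/repos/{repo_id}`), whose response carries the checked-out branch in a top-level `branch` field; the host, token, and repo ID are placeholders the caller must supply:

```python
import json
import urllib.request

def branch_from_repo_info(repo_info: dict) -> str:
    # The Repos API response includes a top-level "branch" field.
    return repo_info["branch"]

def get_current_branch(host: str, token: str, repo_id: str) -> str:
    # host, token and repo_id are assumptions, e.g.
    # host = "https://<workspace>.azuredatabricks.net"
    req = urllib.request.Request(
        f"{host}/api/2.0/repos/{repo_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return branch_from_repo_info(json.load(resp))

# Shape of a truncated sample response, for illustration only:
sample = {"id": 123, "provider": "gitHub", "branch": "main"}
print(branch_from_repo_info(sample))
```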

Anuradha_Mel
by Databricks Partner
  • 1022 Views
  • 1 reply
  • 0 kudos

DLT Pipeline

Hello, I have written the below simple code to write data to a catalog table using a simple DLT pipeline. As part of the below program I am reading a file from a blob container and trying to write to a catalog table. The new catalog table got created but the table d...

Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

The issue with your DLT pipeline is that you've defined the table and schema correctly, but you haven't actually implemented the data loading logic in your `ingest_from_storage()` function. While you've created the function, you're not calling it any...
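The shape of the fix can be sketched outside Databricks. Here `table` is a stand-in for the real `dlt.table` decorator and the body of `ingest_from_storage()` is hypothetical (the original post reads from blob storage); the point is that the decorated table function must actually call the loader and return its result:

```python
# Stand-in for the dlt.table decorator, so the pattern runs outside Databricks.
def table(fn):
    fn.is_table = True  # mimics registering fn as a table definition
    return fn

def ingest_from_storage():
    # Hypothetical loader; in a real pipeline this would read from storage
    # and return a DataFrame.
    return [{"id": 1}, {"id": 2}]

@table
def my_catalog_table():
    # The fix: invoke the loader and return its output. Merely defining
    # ingest_from_storage() above does nothing by itself.
    return ingest_from_storage()

print(my_catalog_table())
```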

Gpu
by New Contributor
  • 1203 Views
  • 1 reply
  • 0 kudos

How to get the hadoopConfiguration in a unity catalog standard access mode app ?

Context: a job running on a job cluster configured in Standard access mode (Shared access mode); Scala 2.12.15 / Spark 3.5.0 jar program; Databricks Runtime 15.4 LTS. In this context, it is not possible to get the sparkSession.sparkContext, as confirme...

Get Started Discussions
Scala
Unity Catalog
Latest Reply
Louis_Frolio
Databricks Employee
  • 0 kudos

In Unity Catalog standard access mode (formerly shared access mode) with Databricks Runtime 15.4 LTS, direct access to `sparkSession.sparkContext` is restricted as part of the security limitations. However, there are still ways to access the Hadoop c...

pg289
by New Contributor II
  • 6512 Views
  • 1 reply
  • 0 kudos

How to connect to an on-premise implementation of S3 storage (such as Minio) in Databricks Notebooks

I manage a large data lake of Iceberg tables stored on premise in S3 storage from MinIO. I need a Spark cluster to run ETL jobs. I decided to try Databricks as there were no other good options. However, I'm unable to properly access my tables or even...

Latest Reply
SP_6721
Honored Contributor II
  • 0 kudos

Not sure, but Databricks may default to AWS-style paths if the configurations are incomplete. Try setting the MinIO endpoint by configuring spark.hadoop.fs.s3a.endpoint to your MinIO server's URL. If MinIO uses HTTP, disable SSL by setting spark.hado...
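The reply's truncated list of settings can be sketched as cluster Spark configuration. These are standard Hadoop S3A properties; the endpoint URL and secret-scope names are placeholders for a typical MinIO setup, not values from the original thread:

```
spark.hadoop.fs.s3a.endpoint http://minio.example.com:9000
spark.hadoop.fs.s3a.connection.ssl.enabled false
spark.hadoop.fs.s3a.path.style.access true
spark.hadoop.fs.s3a.access.key {{secrets/minio/access-key}}
spark.hadoop.fs.s3a.secret.key {{secrets/minio/secret-key}}
```

Path-style access is usually required for MinIO, since it does not serve AWS-style virtual-hosted bucket URLs by default.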

Malthe
by Valued Contributor II
  • 4394 Views
  • 2 replies
  • 0 kudos

Create DLT pipeline in CI/CD with role segregation

In the documentation, most examples use the CREATE OR REFRESH STREAMING TABLE command. Meanwhile, from a role segregation perspective, create and refresh operations should happen in separate contexts. That is, we want to create these objects (which e...

Latest Reply
Renu_
Valued Contributor II
  • 0 kudos

Hi @Malthe, refreshing is automatically handled during pipeline runs here. To implement effective role segregation, you should define separate DLT pipelines for deployment and execution, each with its own set of roles and permissions. This approac...

1 More Replies
Krthk
by New Contributor
  • 1461 Views
  • 1 reply
  • 1 kudos

Resolved! Jobs overhead - why?

Hi, I have a Python notebook that I want to execute in an automated manner. One way I found was to attach it to a job/task and hit it using the API from my local machine. However, this seems to add significant overhead; my code, even if it's just one ...

Get Started Discussions
API
automation
jobs
Jobs api spark
spark
Latest Reply
Isi
Honored Contributor III
  • 1 kudos

Hey @Krthk, if you want to orchestrate a notebook, the easiest way is to go to File > Schedule directly from the notebook. My recommendation is to use cron syntax to define when it should run, and attach it to a predefined cluster or configure a new j...
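For reference, Databricks job schedules use Quartz cron syntax (a seconds field comes first). A sketch of the `schedule` block in a Jobs API job definition; the expression (6:30 on weekdays) and timezone are example values:

```json
{
  "schedule": {
    "quartz_cron_expression": "0 30 6 ? * MON-FRI",
    "timezone_id": "UTC",
    "pause_status": "UNPAUSED"
  }
}
```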

phguk
by New Contributor III
  • 48275 Views
  • 5 replies
  • 3 kudos

Using Azure Key Vault secret to access Azure Storage

I am trying to configure access to an Azure Storage Account (ADLS2) using OAuth. The doc here gives an example of how to specify a secret in a cluster's Spark configuration: {{secrets/<secret-scope>/<service-credential-key>}}. I can see how this works for ...
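For context, the secret reference from the question slots into the standard ABFS OAuth properties roughly like this (property names follow the Hadoop ABFS driver; the storage account, application ID, and tenant ID are placeholders):

```
fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net OAuth
fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net <application-id>
fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net {{secrets/<secret-scope>/<service-credential-key>}}
fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net https://login.microsoftonline.com/<tenant-id>/oauth2/token
```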

Latest Reply
bot_axel
New Contributor II
  • 3 kudos

New doc link: https://learn.microsoft.com/en-us/azure/databricks/security/secrets/

4 More Replies