Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

ik8
by New Contributor
  • 341 Views
  • 1 replies
  • 1 kudos

Resolved! Accessing views using unitycatalog module

Hi, I'm trying to access views in my catalog in Databricks using the unitycatalog open-source module. When I try to do so, I get an error message indicating this is not possible: cannot be accessed from outside of Databricks Compute Environment due to...

Latest Reply
BigRoux
Databricks Employee
  • 1 kudos

Here are some helpful tips: based on the latest Databricks documentation and internal guides, it is currently not possible to grant external access (via open-source Unity Catalog APIs or credential vending) to Unity Catalog views (i.e., objects w...

JohnsonBDSouza
by New Contributor II
  • 6228 Views
  • 4 replies
  • 0 kudos

Unable to create Iceberg tables pointing to data in S3 and run queries against the tables.

I need to set up Iceberg tables in the Databricks environment, but the data resides in an S3 bucket, and then read these tables by running SQL queries. The Databricks environment has access to S3. This is done by setting up access by mapping the Instance Pr...

Latest Reply
sridharplv
Valued Contributor II
  • 0 kudos

@JohnsonBDSouza, @PujithaKarnati, @Venkat5, there are three ways to use the Iceberg format in Databricks based on recent updates at DAIS 2025: 1) managed Iceberg tables, 2) foreign Iceberg tables, 3) enabling Iceberg reads on Delta tables. Please refer be...

3 More Replies
Siebert_Looije
by Contributor
  • 2942 Views
  • 1 replies
  • 0 kudos

How to search for an empty string in a text filter with Lakeview Dashboards

Hi, I have created a Lakeview dashboard with a couple of filters and a table. Now I would like to check whether a certain filter (column) has an empty string, but if I search for ' ' it shows 'no data'. I am wondering how I can search for an empty stri...

Latest Reply
melina-belloti
New Contributor II
  • 0 kudos

Hi! Were you able to find a resolution for this problem?

maartenvr
by New Contributor III
  • 17552 Views
  • 6 replies
  • 1 kudos

Installed Library / Module not found through Databricks Connect LTS 12.2

Hi all, we recently upgraded our Databricks compute cluster from runtime version 10.4 LTS to 12.2 LTS. After the upgrade, one of our Python scripts suddenly fails with a module-not-found error, indicating that our custom module "xml_parser" i...

Latest Reply
jguski
New Contributor II
  • 1 kudos

Hi @maartenvr, hi @Debayan, are there any updates on this? Have you found a solution, or can the problem at least be narrowed down to specific DBR versions? I am on a cluster with 11.3 LTS and deploy my custom packaged code (named simply 'src') as P...

5 More Replies
lance-gliser
by New Contributor
  • 3020 Views
  • 6 replies
  • 0 kudos

Databricks apps - Volumes and Workspace - FileNotFound issues

I have a Databricks App I need to integrate with volumes using local Python os functions. I've set up a simple test:  def __init__(self, config: ObjectStoreConfig): self.config = config # Ensure our required paths are created ...

Latest Reply
cliff_ng14
New Contributor II
  • 0 kudos

I am facing this issue too. I have added the volume under the app resources as a UC volume with read and write permissions, but pd.read_csv() is unable to find the file path. Please let me know what I can do.

5 More Replies
boitumelodikoko
by Contributor III
  • 463 Views
  • 0 replies
  • 2 kudos

Data Engineering Lessons

Getting into the data space can feel overwhelming, with so many tools, terms, and technologies. But after years in... Expect failure. Design for it. Jobs will fail. The data will be late. Build systems that can recover gracefully, and continually monitor ...

Prasad_Koneru
by New Contributor III
  • 2257 Views
  • 3 replies
  • 0 kudos

Databricks grants update catalog catalog_name --json @privileges.json not updating privileges

Hi team, I am trying to update the catalog permission privileges using the Databricks CLI grants command by appending a JSON file, but it is not updating the privileges. Please help with grants update command usage. Command used: databricks grants update c...

Latest Reply
Pat
Esteemed Contributor
  • 0 kudos

If someone needs this in the future, like I did: the issue is with your JSON structure. The Databricks CLI uses "changes" with "add" instead of "privilege_assignments" with "privileges". { "changes": [ { "principal": "mailid", "add": ...

2 More Replies
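The payload shape Pat describes can be sketched in Python. This is a hedged illustration only: the principal and privilege names below are placeholders, not values from the thread.

```python
import json

# Sketch of the grants payload the Databricks CLI expects: a "changes"
# list with "add" (and optionally "remove") arrays per principal, rather
# than "privilege_assignments" with "privileges". Principal and privilege
# names are illustrative placeholders.
payload = {
    "changes": [
        {
            "principal": "user@example.com",  # placeholder principal
            "add": ["USE_CATALOG", "SELECT"],
        }
    ]
}

# Written to privileges.json, this could then be passed to something like:
#   databricks grants update catalog <catalog_name> --json @privileges.json
privileges_json = json.dumps(payload, indent=2)
print(privileges_json)
```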
chris0991
by New Contributor III
  • 1607 Views
  • 2 replies
  • 1 kudos

Best practices for optimizing Spark jobs

What are some best practices for optimizing Spark jobs in Databricks, especially when dealing with large datasets? Any tips or resources would be greatly appreciated! I'm trying to analyze data on restaurant menu prices, so insights would be especiall...

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

There are so many. Here are a few:
- look for data skew
- shuffle as little as possible
- avoid many small files
- use Spark, not only pure Python
- if using an autoscaling cluster: check that you don't lose a lot of time scaling up/down

1 More Replies
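The first tip in the reply above (look for data skew) can be illustrated with a small, Spark-free sketch: a hot partition key will dominate shuffle time in joins and groupBy operations, and a quick ratio over key counts can flag it before tuning. The `skew_ratio` helper and sample keys are illustrative, not part of the original reply.

```python
from collections import Counter

def skew_ratio(keys):
    """Rough skew indicator: the largest key count divided by the mean
    count across keys. Values far above 1 suggest a hot key that will
    slow shuffle-heavy Spark operations (joins, groupBy)."""
    counts = Counter(keys)
    mean = sum(counts.values()) / len(counts)
    return max(counts.values()) / mean

# A hot key ("NYC") dominates this sample, so the ratio is well above 1:
sample = ["NYC"] * 90 + ["LA"] * 5 + ["SF"] * 5
print(skew_ratio(sample))  # 2.7  (90 / (100 / 3))
```

In practice you would compute the counts with Spark itself (e.g. a groupBy-count on the join key) and consider salting or broadcasting when the ratio is high.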
stucas
by New Contributor
  • 553 Views
  • 1 replies
  • 0 kudos

Logging: Unable to read a /Volumes-based file

Hi, we've just started using Databricks, so I am a little naive about the file system, especially regarding Unity Catalog. The issue is that we're creating a logger and want to write the files based on a queue handler/listener pattern. The pattern...

Latest Reply
FedeRaimondi
Contributor
  • 0 kudos

When using the CLI you need to add the scheme: dbfs:/Volumes/... The rest should be fine to reference with "/Volumes/...". For more info, see Manage files in volumes | Databricks Documentation. Hope this solves the issue!

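The scheme distinction in the reply above can be captured in a tiny helper: CLI-facing tools want the dbfs: scheme, while in-notebook or os-level code can use the bare /Volumes path. The helper name is hypothetical, added only for illustration.

```python
def to_cli_path(volume_path: str) -> str:
    """Hypothetical helper: prefix a Unity Catalog volume path with the
    dbfs: scheme for CLI use (e.g. `databricks fs cp`); paths that
    already carry a scheme are returned unchanged."""
    if volume_path.startswith("/Volumes/"):
        return "dbfs:" + volume_path
    return volume_path

print(to_cli_path("/Volumes/main/default/logs/app.log"))
# dbfs:/Volumes/main/default/logs/app.log
```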
esistfred
by New Contributor III
  • 2124 Views
  • 3 replies
  • 3 kudos

Resolved! How to use variable-overrides.json for environment-specific configuration in Asset Bundles?

Hi all, could someone clarify the intended usage of the variable-overrides.json file in Databricks Asset Bundles? Let me give some context. Let's say my repository layout looks like this: databricks/ ├── notebooks/ │ └── notebook.ipynb ├── resources/ ...

Latest Reply
esistfred
New Contributor III
  • 3 kudos

It does. Thanks for the response. I also continued playing around with it and found a way using the variable-overrides.json file. I'll leave it here in case anyone is interested. Repository layout: databricks/ ├── notebooks/ │ └── notebook.ipynb ...

2 More Replies
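A minimal sketch of the per-target override file discussed in this thread, assuming the `.databricks/bundle/<target>/variable-overrides.json` lookup path that Asset Bundles use for local variable overrides; the target name and variable names below are examples only.

```python
import json
import pathlib
import tempfile

def write_overrides(repo_root: str, target: str, variables: dict) -> pathlib.Path:
    """Write a variable-overrides.json for one bundle target under
    .databricks/bundle/<target>/ (the location Asset Bundles check for
    local variable overrides). Variable names here are placeholders."""
    path = (pathlib.Path(repo_root) / ".databricks" / "bundle"
            / target / "variable-overrides.json")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(variables, indent=2))
    return path

# Example: a dev-only catalog override in a throwaway directory.
p = write_overrides(tempfile.mkdtemp(), "dev", {"catalog": "dev_catalog"})
print(p.name)  # variable-overrides.json
```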
Phani1
by Valued Contributor II
  • 898 Views
  • 1 replies
  • 0 kudos

Resolved! Workspace Consolidation Strategy in Databricks

Hi team, the customer is facing a challenge related to increasing Databricks workspace maintenance costs. Apparently, every project creates its own workspace for specific functionalities, and this has become standard practice. As a result, the n...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

This is something that you should discuss with your Databricks rep, in my opinion. Even with standard tools, migrating and consolidating 200 workspaces is something that needs very careful planning and testing.

sastopy
by New Contributor II
  • 459 Views
  • 0 replies
  • 0 kudos

SAS TO DATABRICKS MIGRATION

SAS to PY is an AI/ML-based accelerator designed for "SAS to Python or PySpark" code migration. This accelerator is engineered to convert legacy proprietary SAS code to the more flexible, open-source Python or PySpark environment with 95% automatica...

darioschiraldi9
by New Contributor II
  • 504 Views
  • 1 replies
  • 0 kudos

Dario Schiraldi: How do I integrate Databricks with AWS?

Hi everyone, I am Dario Schiraldi, CEO of Travel Works, and I am reaching out to the community for some insights. We are in the process of integrating Databricks with AWS for a new project, and I would love to hear from anyone who has experience with t...

Latest Reply
Khaja_Zaffer
Contributor
  • 0 kudos

Hello Dario, good to meet you. You can connect with your Databricks account manager. Azure also provides first-party assistance for Databricks, so you can check Azure services as well. Thank you.

Alexandru
by New Contributor III
  • 4310 Views
  • 4 replies
  • 0 kudos

Resolved! vscode python project for development

Hi, I'm trying to set up a local development environment using Python / VS Code / Poetry. Also, linting is enabled (Microsoft Pylance extension) and python.analysis.typeCheckingMode is set to strict. We are using Python files for our code (.py) with...

Latest Reply
A_N
New Contributor II
  • 0 kudos

How did you solve the type-error checks on `pyspark.sql`? mypy doesn't create the missing stubs for that one?

3 More Replies
