Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

aw1
by New Contributor
  • 271 Views
  • 1 reply
  • 0 kudos

Streamlit in Databricks

Hi, I have developed a Streamlit app locally on my desktop using dummy data, and now I want to use actual data stored in Azure Blob Storage. I have tried to run the same code within a notebook, but keep getting dependency errors. Is there...

Latest Reply
Advika
Databricks Employee
  • 0 kudos

Hello @aw1! What exact dependency errors or permission failures are you getting? Can you please share the error message?

OrbitAnalytics
by New Contributor
  • 324 Views
  • 0 replies
  • 0 kudos

Exclusive Masterclass: Oracle Fusion + Databricks Integration using Orbit Analytics

Transform your Oracle Fusion and Databricks data into AI-powered business intelligence with Orbit Analytics. Exclusive Masterclass: Oracle Fusion + Databricks Integration (for more information, click here). Why This Matters to You: Are you struggling to unloc...

abhirupa7
by New Contributor
  • 389 Views
  • 2 replies
  • 0 kudos

databricks dashboard deployment (schema and catalog modification)

I have a Databricks dashboard. I have deployed the lvdash.json file through yml (resource.json) from dev to qa env. Now I can see my dashboard's published version in the resources folder. I want to change the catalog and schema of those underlying queries I ...

Latest Reply
alexajames
New Contributor III
  • 0 kudos

You can try using DAB to promote the dashboard and parameterize the query. For more details, check out the DAB dashboard documentation.

1 More Replies
UddP
by New Contributor III
  • 21428 Views
  • 34 replies
  • 1 kudos

Resolved! My Databricks exam got suspended just for coming closer to the laptop screen to read the question and options

Hi team, my Databricks Certified Data Engineer Associate exam got suspended within 10 minutes. I had also shown my exam room to the proctor. My exam got suspended due to eye movement, but I was not moving my eyes away from the laptop screen. It's hard to focus...

Latest Reply
Kavya_AD
New Contributor II
  • 1 kudos

@Cert-TeamOPS I am writing to raise a concern regarding an interruption that occurred during my Databricks Certified Data Engineer Associate exam scheduled for today at 1:15 PM. I began the exam at 1:00 PM, and the experience was smooth until I recei...

33 More Replies
Alex79
by New Contributor II
  • 745 Views
  • 7 replies
  • 5 kudos

Resolved! How to create classes that can be instantiated from other notebooks?

Hi, I am familiar with object-oriented programming and cannot really get my head around the philosophy of coding in Databricks. My approach, which naturally consists in creating classes and instantiating objects, does not seem to be the right one. Can som...

Latest Reply
BS_THE_ANALYST
Esteemed Contributor III
  • 5 kudos

Legendary, @szymon_dybczak. All the best, BS

6 More Replies
itamarwe
by New Contributor II
  • 2001 Views
  • 3 replies
  • 1 kudos

Google PubSub for DLT - Error

I'm trying to create a Delta Live Table from a Google Pub/Sub stream. Unfortunately, I'm getting the following error: org.apache.spark.sql.streaming.StreamingQueryException: [PS_FETCH_RETRY_EXCEPTION] Task in pubsub fetch stage cannot be retried. Partiti...

Latest Reply
sahilsagar302
New Contributor II
  • 1 kudos

@itamarwe Could you please share which permission caused the issue and how it was resolved?

2 More Replies
Srajole
by New Contributor
  • 1789 Views
  • 2 replies
  • 2 kudos

Data load issue

I have a job in Databricks which completed successfully, but the data has not been written into the target table. I have checked all the possible causes; everything in the code is correct: target table name, source table name, etc. It is a Fu...

Latest Reply
cgrant
Databricks Employee
  • 2 kudos

This looks like a misconfigured Query Watchdog, specifically the below config: spark.conf.get("spark.databricks.queryWatchdog.outputRatioThreshold") Please check the value of this config - it is 1000 by default. Also, we recommend using Jobs Comput...
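As a hedged illustration of that suggestion (not necessarily the poster's exact fix), inspecting and, if appropriate, raising the Query Watchdog threshold from a Databricks notebook might look like the sketch below. The `spark` session is predefined in Databricks notebooks; the guard only lets the sketch run elsewhere.

```python
# Sketch: inspect and adjust the Query Watchdog output ratio threshold.
# Assumes a Databricks notebook, where `spark` is predefined.
THRESHOLD_KEY = "spark.databricks.queryWatchdog.outputRatioThreshold"

spark = globals().get("spark")  # provided automatically in Databricks notebooks

if spark is not None:
    # Default is 1000: a query producing more than 1000x as many output rows
    # as input rows can be cancelled by the watchdog.
    print(THRESHOLD_KEY, "=", spark.conf.get(THRESHOLD_KEY))

    # If the job legitimately fans out rows, raise the threshold for this
    # session rather than disabling the watchdog outright.
    spark.conf.set(THRESHOLD_KEY, "5000")
```

The value `5000` is only an example; pick a threshold that reflects the job's real output-to-input ratio.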

1 More Replies
jano
by New Contributor III
  • 421 Views
  • 1 reply
  • 1 kudos

Delta UniForm

When we save a Delta table using the UniForm option, we see a 50% drop in table size. When we enable UniForm on an existing Delta table after creation, we see no change in data size. Is this expected, or are others seeing this as well?

Get Started Discussions
Data Size
delta
UniForm
Latest Reply
Louis_Frolio
Databricks Employee
  • 1 kudos

Re: "When we save a delta table using the UniForm option we are seeing a 50% drop in table size": What format are you starting with? e.g., CSV -> Delta.

ChristianRRL
by Valued Contributor III
  • 618 Views
  • 1 reply
  • 2 kudos

Resolved! AutoLoader Pros/Cons When Extracting Data (Cross-Post)

Cross-posting from: https://community.databricks.com/t5/data-engineering/autoloader-pros-cons-when-extracting-data/td-p/127400 Hi there, I am interested in using AutoLoader, but I'd like to get a bit of clarity on whether it makes sense in my case. Based on e...

Latest Reply
BS_THE_ANALYST
Esteemed Contributor III
  • 2 kudos

You’ve already identified data duplication as a potential con of landing the data first, but there are several benefits to this approach that might not be immediately obvious. Schema Inference and Evolution: AutoLoader can automatically infer the sche...

FedeRaimondi
by Contributor II
  • 662 Views
  • 3 replies
  • 2 kudos

Resolved! Python module import with Dedicated access mode

I currently have a repo connected in Databricks, and I was able to correctly import a Python module from the src folder located in the same root. Since I am using a Machine Learning runtime, I am forced to choose a Dedicated (formerly: Single user) access m...

Latest Reply
FedeRaimondi
Contributor II
  • 2 kudos

Thanks @szymon_dybczak! I confirm that's a permission issue, and assigning "CAN MANAGE" solves it. I still find it not really intuitive, since the goal is to use a shared cluster (with ML runtime) for development purposes. I mean, it would make sense ...

2 More Replies
Boban12335
by New Contributor
  • 286 Views
  • 1 reply
  • 0 kudos

Unity Catalog tool function with custom parameters not being used

I have created a UC tool that takes in a few custom STRING parameters. I gave this tool to an AI agent using the Mosaic AI Agent Framework, with hardcoded parameter values for testing. The issue is my AI agent hallucinates and injects its own AI-gener...

Latest Reply
Nivethan_Venkat
Contributor III
  • 0 kudos

Hi @Boban12335, can you share the UC function definition so we can understand your problem better? Best Regards, Nivethan V

ChristianRRL
by Valued Contributor III
  • 530 Views
  • 3 replies
  • 3 kudos

Resolved! AutoLoader - Write To Console (Notebook Cell) Long Running Issue

Hi there, I am likely misunderstanding how to use AutoLoader properly while developing/testing. I am trying to write a simple AutoLoader notebook cell to *read* the contents of a path with JSON files and *write* them to the console (i.e., notebook cell) i...

Latest Reply
SP_6721
Honored Contributor
  • 3 kudos

Hi @ChristianRRL, It looks like spark.readStream with Auto Loader creates a continuous streaming job by default, which means it keeps running while waiting for new files. To avoid this, you can control the behaviour using trigger(availableNow=True), w...
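A rough sketch of that suggestion follows. The paths are hypothetical, and `spark` is the session Databricks notebooks predefine; the guard only lets the sketch run outside a notebook.

```python
# Sketch: Auto Loader read that drains the files currently available and then
# stops, instead of streaming forever. All paths below are hypothetical.
INPUT_PATH = "/Volumes/dev/landing/json"         # hypothetical landing dir
SCHEMA_PATH = "/Volumes/dev/_schemas/json_demo"  # schema inference state
CHECKPOINT_PATH = "/Volumes/dev/_chk/json_demo"  # stream progress state

autoloader_opts = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": SCHEMA_PATH,
}

spark = globals().get("spark")  # predefined in Databricks notebooks

if spark is not None:
    df = (
        spark.readStream
        .format("cloudFiles")
        .options(**autoloader_opts)
        .load(INPUT_PATH)
    )

    # trigger(availableNow=True): process everything present now, then finish,
    # so the notebook cell completes instead of waiting for new files.
    (
        df.writeStream
        .format("console")
        .option("checkpointLocation", CHECKPOINT_PATH)
        .trigger(availableNow=True)
        .start()
        .awaitTermination()
    )
```

With a checkpoint location set, re-running the cell picks up only files that arrived since the last run.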

2 More Replies
Lucas_N
by New Contributor II
  • 3039 Views
  • 2 replies
  • 3 kudos

Resolved! Documentation for spatial SQL public preview - Where is it?

Hi everybody, since DBR 17.1, spatial SQL functions (st_point(), st_distancesphere, ...) are in public preview. The functionality is presented in this talk, Geospatial Insights With Databricks SQL: Techniques and Applications, or discussed here in the fo...

Latest Reply
Geospatial_Gwen
New Contributor III
  • 3 kudos

Is this what you were after? https://docs.databricks.com/aws/en/sql/language-manual/sql-ref-st-geospatial-functions

1 More Replies
Danish1105
by New Contributor II
  • 373 Views
  • 1 reply
  • 1 kudos

Resolved! run_type has some null values

Just wondering — we know that the run_type column in the job run timeline usually has only three values: JOB_RUN, SUBMIT_RUN, and WORKFLOW_RUN. So why do we also see a null value there? Any reason?  

Latest Reply
szymon_dybczak
Esteemed Contributor III
  • 1 kudos

Hi @Danish1105, one possible explanation for the null values is the following note in the documentation: "Not populated for rows emitted before late August 2024." In the case of my workspace, this seems valid; I have only nulls wh...

