Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Azsdc
by New Contributor
  • 2783 Views
  • 0 replies
  • 0 kudos

Usage of if else condition for data check

Hi, in a particular Workflows job, I am trying to add some data checks between each task by using an if/else statement. I used the following statement in a notebook to pass a parameter into the if/else condition to check the logic: {"job_id": XXXXX, "notebook_params": ...

Ha2001
by New Contributor
  • 2504 Views
  • 1 reply
  • 1 kudos

Databricks Repos API Limitations

Hi, I have started using Databricks recently, and I'm not able to find the right solution in the documentation. I have linked multiple repos in my Databricks workspace in the Repos folder, and I want to update the repos with the remote Azure DevOps reposit...

Get Started Discussions
azure devops
Databricks
REST API
Latest Reply
Ayushi_Suthar
Databricks Employee
  • 1 kudos

Hi @Ha2001, good day! The Databricks API has a limit of 10 requests per second for the combined /repos/* requests in a workspace. You can check the documentation for the API limit: https://docs.databricks.com/en/resources/limits.html#:~:text=Git%20fold...

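Given the 10 requests/second limit mentioned in the reply, one workaround is simple client-side pacing. This is a sketch, not the official client: the host, token, and repo IDs are placeholders, and it assumes the Repos API's PATCH /api/2.0/repos/{repo_id} endpoint for pointing a linked repo at a branch.

```python
import json
import time
import urllib.request

def pacing_interval(max_rps: float) -> float:
    """Seconds to wait between calls to stay under a requests-per-second cap."""
    return 1.0 / max_rps

def update_repo_branch(host: str, token: str, repo_id: int, branch: str) -> dict:
    """PATCH /api/2.0/repos/{repo_id} so the workspace repo tracks `branch`."""
    req = urllib.request.Request(
        url=f"{host}/api/2.0/repos/{repo_id}",
        data=json.dumps({"branch": branch}).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def update_all(host, token, repo_ids, branch, max_rps=10):
    """Update several linked repos while respecting the /repos/* rate limit."""
    for repo_id in repo_ids:
        update_repo_branch(host, token, repo_id, branch)
        time.sleep(pacing_interval(max_rps))  # coarse throttle; no burst handling
```

A token-bucket limiter or retry-on-429 would be more robust, but fixed pacing is usually enough for a handful of repos.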
jcozar
by Contributor
  • 4608 Views
  • 4 replies
  • 1 kudos

Resolved! Spark streaming query stops after code exception in notebook since 14.3

Hi! I am experiencing something that I cannot find in the documentation: in Databricks, using Databricks Runtime 13.x, when I start a streaming query (using the .start method), it creates a new query, and while it is running I can execute other code in...

Latest Reply
Lakshay
Databricks Employee
  • 1 kudos

You can use the help portal: https://help.databricks.com/s/

3 More Replies
GlennStrycker2
by New Contributor III
  • 1573 Views
  • 1 reply
  • 1 kudos

Why so many different domains and accounts?

I've lost count of how many different domains and accounts Databricks requires me to use for their services. Every domain requires its own account username, password, etc., and nothing is synced. I can't even keep track of which email addre...

Latest Reply
GlennStrycker2
New Contributor III
  • 1 kudos

Plus customer-academy.databricks.com, accounts.cloud.databricks.com, databricks.my.site.com

HiraNisar
by New Contributor
  • 1137 Views
  • 0 replies
  • 0 kudos

AutoML in production

I have a workflow in Databricks with an AutoML pipeline in it. I want to deploy that pipeline in production, but I want to use a shared cluster there. Since AutoML is not compatible with shared clusters, what can be the workaround? (Is it ...

Ajay-Pandey
by Esteemed Contributor III
  • 4548 Views
  • 3 replies
  • 1 kudos

Resolved! Databricks Private Preview Features

Hi all, I just wanted to try the new Databricks Workflows private preview feature (For Each task). Can someone please guide me on how we can enable it in our workspace? I have the same use case in my current project where this feature can help ...

databricks_workflow.gif
Get Started Discussions
Databricks
dataengineering
privatepreview
Latest Reply
Ajay-Pandey
Esteemed Contributor III
  • 1 kudos

If you are using Azure Databricks, just raise a support request regarding the private preview and they will enable it for you!

2 More Replies
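For readers who get the preview enabled: a for-each task is expressed as part of a Jobs API 2.1 task spec. The shape below is a sketch only; the task keys, notebook path, and inputs are illustrative, and the exact schema may differ while the feature is in preview.

```python
import json

# Illustrative Jobs API 2.1 task using for_each_task; all names and paths
# are placeholders, and the preview schema may change.
task = {
    "task_key": "process_each_file",
    "for_each_task": {
        "inputs": json.dumps(["file_a.csv", "file_b.csv", "file_c.csv"]),
        "concurrency": 2,  # how many iterations may run at once
        "task": {
            "task_key": "process_one_file_iteration",
            "notebook_task": {
                "notebook_path": "/Workspace/jobs/process_file",
                "base_parameters": {"file_name": "{{input}}"},
            },
        },
    },
}
print(json.dumps(task, indent=2))
```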
Benedetta
by New Contributor III
  • 9907 Views
  • 4 replies
  • 0 kudos

My notebooks running in parallel no longer include jobids

Hey Databricks - what happened to the jobids that used to be returned from parallel runs? We used them to identify which link matched the output. See attached. How are we supposed to match up the links? 

Latest Reply
Benedetta
New Contributor III
  • 0 kudos

Hey @Databricks, @Retired_mod  - waccha doing? Yesterday's "newer version of the app" that got rolled out seems to have broken the parallel runs. The ephemeral notebook is missing. The job ids are missing. What's up? Benedetta

3 More Replies
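While the returned job IDs are missing, one workaround is to pair each parallel run's input with its own return value, so outputs stay matchable regardless of what the run links show. A sketch using a thread pool; `dbutils.notebook.run` is the assumed runner in a real notebook and is stubbed here so the pairing logic is visible.

```python
from concurrent.futures import ThreadPoolExecutor

def run_notebook(path: str, timeout: int, args: dict) -> str:
    # Stand-in for dbutils.notebook.run(path, timeout, args);
    # in a Databricks notebook you would call that instead.
    return f"result-for-{args['partition']}"

def run_all(paths_and_args):
    """Run notebooks in parallel, keeping (args, result) pairs so each
    output can be matched back to the run that produced it."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {
            pool.submit(run_notebook, path, 3600, args): args
            for path, args in paths_and_args
        }
        return [(args, fut.result()) for fut, args in futures.items()]

pairs = run_all([("/jobs/etl", {"partition": "a"}),
                 ("/jobs/etl", {"partition": "b"})])
```

Because each result is keyed by the arguments you submitted, no UI-provided job ID is needed to reconcile outputs.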
yatharth
by New Contributor III
  • 1792 Views
  • 0 replies
  • 0 kudos

Unable to build LZO-codec

Hi Community, I am trying to build the lzo-codec in my DBFS using https://docs.databricks.com/en/_extras/notebooks/source/init-lzo-compressed-files.html but I am facing the error: Cloning into 'hadoop-lzo'... The JAVA_HOME environment variable is not defined c...

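The error above usually means the build shell cannot see a JDK. A hedged sketch of exporting JAVA_HOME before invoking the hadoop-lzo build from a notebook; the JDK path is an assumption, so verify what your runtime image actually ships (e.g. by listing /usr/lib/jvm) before relying on it.

```python
import os

# Assumed JDK location on the cluster image; confirm with `ls /usr/lib/jvm` first.
jdk_home = "/usr/lib/jvm/java-8-openjdk-amd64"

# Build a child-process environment that carries JAVA_HOME and puts the
# JDK's bin directory on PATH, so ant/maven builds can find javac.
env = dict(os.environ)
env["JAVA_HOME"] = jdk_home
env["PATH"] = f"{jdk_home}/bin:" + env.get("PATH", "")

# The hadoop-lzo build steps would then inherit JAVA_HOME, e.g.:
# subprocess.run(["ant", "compile-native", "tar"],
#                cwd="hadoop-lzo", env=env, check=True)
```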
databricks0601
by New Contributor III
  • 1852 Views
  • 1 reply
  • 1 kudos

Looking for a promo code for Databricks certification

I went through the webinar suggested and the courses mentioned, but I have not received any voucher code for the certification. Can anyone please help. Thank you so much. 

Latest Reply
databricks0601
New Contributor III
  • 1 kudos

Thank you. The link to the ticketing portal in the response is broken. I have opened a ticket with the help center. Kindly let me know if anything else is needed. Appreciate the help.

amitpphatak
by New Contributor II
  • 2208 Views
  • 1 reply
  • 0 kudos

LLM Chatbot With Retrieval Augmented Generation (RAG)

When executing the second block under 01-Data-Preparation-and-Index, I get the following error. Please help. AnalysisException: [RequestId=c9625879-339d-45c6-abb5-f70d724ddb47 ErrorClass=INVALID_STATE] Metastore storage root URL does not exist. Please p...

Get Started Discussions
catalog
llm-rag-chatbot
metastore
storage location
Latest Reply
amitpphatak
New Contributor II
  • 0 kudos

I fixed this issue by providing a MANAGED LOCATION for the catalog, which meant updating the _resources/00-init file as follows: spark.sql(f"CREATE CATALOG IF NOT EXISTS {catalog} MANAGED LOCATION '<location path>'")

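For reference, a sketch of the fix described in that reply. The catalog name and storage path below are placeholders, and the statement only works on a Unity-Catalog-enabled workspace; giving the catalog its own MANAGED LOCATION avoids depending on a metastore-level storage root.

```python
def create_catalog_sql(catalog: str, location: str) -> str:
    """Build a CREATE CATALOG statement with an explicit MANAGED LOCATION,
    so managed tables don't rely on the metastore's storage root URL."""
    return (
        f"CREATE CATALOG IF NOT EXISTS {catalog} "
        f"MANAGED LOCATION '{location}'"
    )

sql = create_catalog_sql(
    "my_rag_catalog",
    "abfss://container@account.dfs.core.windows.net/uc-root",  # placeholder path
)
# In the notebook you would run: spark.sql(sql)
```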
Benedetta
by New Contributor III
  • 4684 Views
  • 3 replies
  • 0 kudos

Resolved! My Global Init Script doesn't run with the new(er) LTS version 13.3. It runs great on 12.2LTS

Hey Databricks, seems like you changed the way Global Init Scripts work. How come you changed it? My Global Init Script runs great on 12.2 LTS but not on the new(er) LTS version 13.3. We don't have Unity Catalog turned on. What's up with that? Ar...

Latest Reply
Benedetta
New Contributor III
  • 0 kudos

Thank you ChloeBors. I tried upgrading the Ubuntu version per your suggestion but got a new error: "Can't open lib 'ODBC Driver 17 for SQL Server': file not found (0) (SQLDriverConnect)". I tried modifying this line to 18: ACCEPT_EULA=Y apt-get install mso...

2 More Replies
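One detail worth flagging for this thread: if the init script installs msodbcsql18 instead of msodbcsql17, any connection string that names "ODBC Driver 17 for SQL Server" must change to match. A small sketch of building the string (server and database names are placeholders); note that driver 18 enables encryption by default.

```python
def odbc_connection_string(server: str, database: str,
                           driver_version: int = 18) -> str:
    """Build a SQL Server ODBC connection string; the installed package
    (msodbcsql17 vs msodbcsql18) must match the Driver= name used here."""
    return (
        f"Driver={{ODBC Driver {driver_version} for SQL Server}};"
        f"Server={server};Database={database};"
        # Driver 18 defaults to encrypted connections; trusting the server
        # certificate here is an assumption suitable only for testing.
        "Encrypt=yes;TrustServerCertificate=yes;"
    )

conn_str = odbc_connection_string("myserver.database.windows.net", "mydb")
```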
Pbr
by New Contributor
  • 4005 Views
  • 0 replies
  • 0 kudos

How to save a catalog table as a spark or pandas dataframe?

Hello, I have a table in my catalog, and I want to load it as a pandas or Spark DataFrame. I was using this code to do that before, but I don't know what happened recently that makes the code fail now: from pyspark.sql import SparkSession spark = S...

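A common pattern for this question: in a Databricks notebook the `spark` session already exists, so rebuilding it via SparkSession is unnecessary and can misbehave on newer runtimes. The sketch below uses a placeholder three-level Unity Catalog name; only the name-building helper runs outside Databricks.

```python
def qualified_name(catalog: str, schema: str, table: str) -> str:
    """Three-level Unity Catalog name as accepted by spark.table()."""
    return f"{catalog}.{schema}.{table}"

name = qualified_name("main", "default", "my_table")  # placeholder names

# In a Databricks notebook (do not construct a new SparkSession):
# df = spark.table(name)    # Spark DataFrame
# pdf = df.toPandas()       # pandas DataFrame; collects all rows to the driver
```

`toPandas()` pulls the whole table onto the driver, so it is only sensible for data that fits in driver memory.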
