Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

NhanNguyen
by Contributor III
  • 3072 Views
  • 2 replies
  • 1 kudos

Resolved! Cannot create delta location with mount path

Hi all, I'm trying to create a table but cannot use a predefined mount path like '/mnt/silver/'. However, if I use the full path of the Azure blob container it is created successfully, like this: `CREATE TABLE IF NOT EXISTS nhan_databricks.f1_processed.circuits (...

Latest Reply
NhanNguyen
Contributor III
  • 1 kudos

Oh, thanks for your answer. Actually, I'm using Unity Catalog.

1 More Replies
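For readers hitting the same error: a minimal sketch of the Unity Catalog-style fix discussed in this thread, pointing the table at the cloud URI of a registered external location instead of a DBFS mount path. The column list and storage-account name are placeholders, not taken from the original post.

```python
# Minimal sketch, assuming Unity Catalog: use the cloud URI of a registered
# external location rather than a DBFS mount such as '/mnt/silver/'.
# Column list and storage-account name are placeholders.
spark.sql("""
    CREATE TABLE IF NOT EXISTS nhan_databricks.f1_processed.circuits (
        circuit_id INT,
        name STRING
    )
    USING DELTA
    LOCATION 'abfss://silver@<storage-account>.dfs.core.windows.net/f1/circuits'
""")
```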
Hetnon
by New Contributor II
  • 1923 Views
  • 1 replies
  • 0 kudos

Can't Create a Workspace using Google Cloud

Trying to create my first workspace. I hit 'Create workspace' and I see three buckets being created in my GCP project, but nothing shows up under 'Workspaces' in my Databricks console; the only thing there is the 'Create workspace' button. Also, there is no erro...

Surajv
by New Contributor III
  • 4891 Views
  • 2 replies
  • 0 kudos

Getting client.session.cache.size warning in pyspark code using databricks connect

Hi Community, I have set up a Jupyter notebook on a server and installed Databricks Connect in its kernel to leverage my Databricks cluster compute in the notebook and write PySpark code. Whenever I run my code it gives me the warning below: ```WARN Spark...

Latest Reply
Surajv
New Contributor III
  • 0 kudos

Thank you @Riyakh 

1 More Replies
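For context, a minimal sketch of the kind of Databricks Connect setup the post describes (databricks-connect for DBR 13+ in a local Jupyter kernel); the workspace URL, token, and cluster ID are placeholders.

```python
# Sketch of a Databricks Connect session from a local Jupyter kernel,
# assuming databricks-connect >= 13.x; credentials below are placeholders.
from databricks.connect import DatabricksSession

spark = (
    DatabricksSession.builder
    .remote(
        host="https://<workspace-url>",
        token="<personal-access-token>",
        cluster_id="<cluster-id>",
    )
    .getOrCreate()
)

spark.range(10).show()  # quick check that the remote cluster is reachable
```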
anonymous_567
by New Contributor II
  • 1715 Views
  • 1 replies
  • 0 kudos

Ingesting Non-Incremental Data into Delta

Hello, I have non-incremental data landing in a storage account. This data contains old data from before as well as new data. I would like to avoid doing a complete table deletion and table creation just to upload the data from storage and have an upd...

Latest Reply
AmanSehgal
Honored Contributor III
  • 0 kudos

Well, if you know the conditions that separate new data from old data, then while reading the data into your dataframe, use a filter or where clause to select the new data and ingest it into your Delta table. This is how you can do it in general. But if you ha...

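A minimal sketch of the filter-then-append approach described in the reply above; the landing path, watermark column, and table name are hypothetical.

```python
# Sketch: keep only records newer than what is already in the Delta table,
# then append just those. Path, column and table names are hypothetical.
from pyspark.sql import functions as F

landed = spark.read.format("parquet").load(
    "abfss://landing@<storage-account>.dfs.core.windows.net/events/"
)

# High-water mark from the existing Delta table (None if the table is empty).
last_loaded = spark.table("main.bronze.events").agg(F.max("event_date")).first()[0]

new_rows = landed if last_loaded is None else landed.filter(F.col("event_date") > F.lit(last_loaded))
new_rows.write.format("delta").mode("append").saveAsTable("main.bronze.events")
```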
Olfa_Kamli
by New Contributor III
  • 1616 Views
  • 1 replies
  • 0 kudos

Log delivery is not creating data in the S3 bucket

Hi, does anyone have an idea of the typical time it takes Databricks to create logs in an S3 bucket using the databricks_mws_log_delivery Terraform resource? I've implemented the code provided in the official Databricks documentation, but I've be...

Latest Reply
Olfa_Kamli
New Contributor III
  • 0 kudos

The issue has been resolved. There was no problem with the code or the API. However, it took over 12 hours for logs to start appearing in my bucket, despite the Databricks documentation indicating that logs should appear within 1 hour. Thank you!

TheIceBrick
by New Contributor III
  • 14783 Views
  • 3 replies
  • 1 kudos

Is there a (request-) size limit for the Databricks Rest Api Sql statements?

When inserting rows through the SQL API (/api/2.0/sql/statements/), once more than a certain number of records (about 25 records with 8 small columns) are included in the statement, the call fails with the error: "The request could not be processed by...

Labels: Get Started Discussions, REST API, SQL Statements
Latest Reply
ChrisCkx
New Contributor II
  • 1 kudos

@TheIceBrick did you find out anything else about this? I am experiencing exactly the same: I can insert up to 35 rows, but it breaks at about 50 rows. The payload size is 42 KB, and I am passing parameters for each row. @Debayan This is nowhere near the 16 MiB /...

2 More Replies
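A hedged sketch of one way around whatever limit is being hit here: send the rows in small batches of parameterised INSERTs to /api/2.0/sql/statements/. The warehouse ID, table name, and batch size are placeholders; the field names follow the SQL Statement Execution API documentation, so double-check them against the current docs.

```python
# Hedged sketch: chunk the rows so each request stays well below the size
# that triggers the error. Warehouse ID, table and batch size are placeholders.
import requests

HOST = "https://<workspace-url>"
TOKEN = "<personal-access-token>"
rows = [{"id": i, "name": f"row-{i}"} for i in range(200)]

for start in range(0, len(rows), 20):  # 20 rows per request
    batch = rows[start:start + 20]
    values = ", ".join(f"(:id{i}, :name{i})" for i in range(len(batch)))
    params = []
    for i, r in enumerate(batch):
        params.append({"name": f"id{i}", "value": str(r["id"]), "type": "INT"})
        params.append({"name": f"name{i}", "value": r["name"], "type": "STRING"})
    resp = requests.post(
        f"{HOST}/api/2.0/sql/statements/",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "warehouse_id": "<warehouse-id>",
            "statement": f"INSERT INTO main.default.my_table (id, name) VALUES {values}",
            "parameters": params,
        },
    )
    resp.raise_for_status()
```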
jenshumrich
by Contributor
  • 12351 Views
  • 2 replies
  • 0 kudos

Long running jobs get lost

Hello, I tried to schedule a long-running job and, surprisingly, it seems to neither terminate (and thus does not let the cluster shut down) nor continue running, even though the state is still "Running". But the truth is that the job has miserably ...

[Screenshots attached: jenshumrich_0-1712742957610.png, jenshumrich_2-1712743008070.png, jenshumrich_3-1712743098546.png]
Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

Have you looked at the SQL plan to see what Spark job 72 was doing?

1 More Replies
chari
by Contributor
  • 2937 Views
  • 2 replies
  • 0 kudos

Reading a CSV file with Spark throws an [insufficient privilege] error

Hello Community, I have some CSV files saved in the Databricks workspace and want to read them with Spark. I use the command df = spark.read.format('csv').load(r'filepath'). However, it throws the error org.apache.spark.SparkSecurityException: [INSU...

Latest Reply
Lakshay
Databricks Employee
  • 0 kudos

If this is a UC-enabled workspace, you need to grant the right access.

1 More Replies
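For anyone landing on this thread, a hedged sketch of the kind of Unity Catalog grants the reply refers to; the principal and object names are placeholders, not from the thread.

```python
# Hedged sketch of typical Unity Catalog grants behind an
# INSUFFICIENT_PERMISSIONS error; principal and object names are placeholders.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_engineers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.default TO `data_engineers`")
spark.sql("GRANT SELECT ON TABLE main.default.circuits TO `data_engineers`")
# Reading raw files by path additionally needs privileges on the storage location:
spark.sql("GRANT READ FILES ON EXTERNAL LOCATION `landing_zone` TO `data_engineers`")
```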
Ajay-Pandey
by Databricks MVP
  • 5261 Views
  • 3 replies
  • 2 kudos

Resolved! Update regarding Community Reward Store

Hi Team, is there any update on the Community Reward Store? It has been discontinued from the old portal, and we still can't see the new portal for it. Is there an expected date when this will be available to community members?

Latest Reply
Ajay-Pandey
Databricks MVP
  • 2 kudos

Thanks for the update.

2 More Replies
chloeh
by New Contributor II
  • 1513 Views
  • 1 replies
  • 0 kudos

Using SQL for Structured Streaming

Hi! I'm new to Databricks. I'm trying to create a data pipeline with structured streaming. A minimal example data pipeline would look like: read from an upstream Kafka source, do some data transformation, then write to a downstream Kafka sink. I want to do...

Latest Reply
chloeh
New Contributor II
  • 0 kudos

OK, I figured out why I was getting an error on the usage of `read_kafka`: my default cluster was set up with the wrong Databricks Runtime.

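For reference, a minimal PySpark sketch of the pipeline shape described in the post (Kafka source, transformation, Kafka sink); it is not the SQL-only `read_kafka` route from the reply, and the broker address, topic names, and checkpoint path are placeholders.

```python
# Sketch of a Kafka-in / transform / Kafka-out structured streaming job;
# broker, topic names and checkpoint path are placeholders.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<broker-host>:9092")
    .option("subscribe", "events_in")
    .load()
)

transformed = raw.selectExpr(
    "CAST(key AS STRING) AS key",
    "upper(CAST(value AS STRING)) AS value",  # stand-in transformation
)

(
    transformed.writeStream.format("kafka")
    .option("kafka.bootstrap.servers", "<broker-host>:9092")
    .option("topic", "events_out")
    .option("checkpointLocation", "/tmp/checkpoints/events_out")
    .start()
)
```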
liormayn
by New Contributor III
  • 2478 Views
  • 0 replies
  • 3 kudos

Error while encoding: java.lang.RuntimeException: org.apache.spark.sql.catalyst.util.GenericArrayDa

Hello :) We are trying to run an existing flow that currently works on EMR, on Databricks. We use LTS 10.4, and when loading the data we get the following error: at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:...

RakeshRakesh_De
by New Contributor III
  • 4751 Views
  • 1 replies
  • 0 kudos

If a user has only 'SELECT' permission on a table in Unity Catalog but no permission on its external location

Hi, suppose a user has 'SELECT' permission on a table but does not have any permission on the table's external location. Will the user be able to read the data from the table? If yes, how will the user be able to read the wh...

Latest Reply
RakeshRakesh_De
New Contributor III
  • 0 kudos

Hi @Retired_mod, thanks for the response. Why is the hyperlinked command not showing in full?

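A hedged illustration of the distinction the question is asking about, assuming standard Unity Catalog behaviour: table reads are governed by the grant on the table, while direct path reads are governed by privileges on the external location. Table names and paths are placeholders.

```python
# Illustration only; table name and path are placeholders.
# 1) Reading through the table is governed by the SELECT grant on the table
#    (plus USE CATALOG / USE SCHEMA on its parents):
spark.table("main.sales.orders").limit(10).show()

# 2) Reading the same data directly by path is governed by privileges on the
#    external location, so without them this call is expected to fail:
spark.read.format("delta").load(
    "abfss://raw@<storage-account>.dfs.core.windows.net/sales/orders"
).show()
```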
RobinK
by Contributor
  • 3614 Views
  • 5 replies
  • 0 kudos

How to switch workspaces via the menu

Hello, in various webinars and videos featuring Databricks instructors, I have noticed that it is possible to switch between different workspaces using the top menu within a workspace. However, in our organization, we have three separate workspaces wi...

Latest Reply
Rajani
Contributor II
  • 0 kudos

Hi @RobinK, looking at the screenshots provided I can see you have access to different workspaces, but the dropdown is still not visible for you. I also checked whether there is a setting for this, but I didn't find one. You can raise a ticket with Databricks and ...

4 More Replies
dustint121
by New Contributor II
  • 3927 Views
  • 1 replies
  • 1 kudos

Resolved! Issue with creating cluster on Community Edition

I have recently signed up for Databricks Community Edition and have yet to successfully create a cluster. I get this message when trying to create a cluster: "Self-bootstrap failure during launch. Please try again later and contact Databricks if the pro...

Latest Reply
Ajay-Pandey
Databricks MVP
  • 1 kudos

Hi @dustint121, it's a Databricks internal issue; wait for some time and it will be resolved.

halox6000
by New Contributor III
  • 4381 Views
  • 3 replies
  • 1 kudos

Resolved! Databricks community edition down?

I am getting this error when trying to create a cluster: "Self-bootstrap failure during launch. Please try again later and contact Databricks if the problem persists. Node daemon fast failed and did not answer ping for instance"

Latest Reply
dustint121
New Contributor II
  • 1 kudos

I still have this issue and have yet to successfully create a cluster instance. Please advise on how this error was fixed.

2 More Replies
