Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Nitin2
by New Contributor
  • 1386 Views
  • 0 replies
  • 0 kudos

Not able to login or change password

Hi, I am unable to log in to Databricks Community Edition. I have tried changing my password; however, no email is sent to my email ID, which is kum.nit7287@gmail.com. Can anyone help?

Chalki
by New Contributor III
  • 7389 Views
  • 3 replies
  • 0 kudos

Iterative reads and writes cause java.lang.OutOfMemoryError: GC overhead limit exceeded

I have an iterative algorithm which reads and writes a DataFrame while iterating through a list of new partitions, like this: for p in partitions_list: df = spark.read.parquet("adls_storage/p") df.write.format("delta").mode("overwrite").option("partitionOver...

Latest Reply
Chalki
New Contributor III
  • 0 kudos

@daniel_sahal I've attached the wrong snip. Actually it is Full GC (Ergonomics) which was bothering me. Now I am attaching the correct snip. But as you said, I scaled up a bit. The thing I forgot to mention is that the table is wide - more than 300 column...

2 More Replies
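One detail worth flagging in the snippet above: "adls_storage/p" is a literal string, so the partition variable p is never interpolated into the path. A minimal sketch of the corrected loop, assuming an existing SparkSession `spark`, a `partitions_list`, and a hypothetical base path and target table (the Spark calls are illustrative, not a verified fix for the GC issue itself):

```python
def partition_path(base: str, p: str) -> str:
    # Interpolate the partition into the path; the original snippet
    # read the literal string "adls_storage/p" on every iteration.
    return f"{base}/{p}"

def rewrite_partitions(spark, partitions_list, base="adls_storage"):
    # Sketch only: assumes `spark` is a live SparkSession and
    # "target_table" is a hypothetical Delta destination.
    for p in partitions_list:
        df = spark.read.parquet(partition_path(base, p))
        (df.write.format("delta")
           .mode("overwrite")
           .option("partitionOverwriteMode", "dynamic")
           .saveAsTable("target_table"))
```

With dynamic partition overwrite, each iteration replaces only the partition being written rather than the whole table.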
Dekova
by New Contributor II
  • 4151 Views
  • 1 reply
  • 3 kudos

Resolved! Using DeltaTable.merge() and generating surrogate keys on insert?

I'm using merge to upsert data into a table: DeltaTable.forName(DESTINATION_TABLE).as("target").merge(merge_df.as("source"), "source.topic = target.topic and source.key = target.key").whenMatched().updateAll().whenNotMatched().insertAll().execute() Id ...

Latest Reply
daniel_sahal
Databricks MVP
  • 3 kudos

@Dekova 1) uuid() is non-deterministic, meaning that it will give you a different result each time you run this function. 2) Per the documentation, "For Databricks Runtime 9.1 and above, MERGE operations support generated columns when you set spark.databri...

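The non-determinism in point 1 is easy to see in plain Python with the standard library's uuid module (the same property holds for Spark's uuid() expression): each uuid4() call yields a fresh random value, which is why it behaves surprisingly inside a re-evaluated MERGE expression. As an illustrative alternative, not something from the thread, a name-based uuid5 over the natural key is deterministic:

```python
import uuid

# uuid4() is random: two calls never (in practice) return the same value,
# so a surrogate key built from it changes on every evaluation.
first = uuid.uuid4()
second = uuid.uuid4()
assert first != second

# uuid5() is deterministic: the same namespace and name always hash to the
# same UUID, making it a stable surrogate key for a given natural key.
key_a = uuid.uuid5(uuid.NAMESPACE_URL, "topic|key")
key_b = uuid.uuid5(uuid.NAMESPACE_URL, "topic|key")
assert key_a == key_b
```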
102842
by New Contributor II
  • 3558 Views
  • 3 replies
  • 2 kudos

Databricks SQL - Conditional Catalog query

Hi, is there a way we can do %sql select * from {{ catalog }}.schema.table where `{{ catalog }}` is a template variable extracted/evaluated from either an environment variable, a Databricks secret, or somewhere else? (Note: not a widget)

Latest Reply
Tharun-Kumar
Databricks Employee
  • 2 kudos

Hi @102842 You can use query parameters to perform this - https://docs.databricks.com/sql/user/queries/query-parameters.html. You can define the catalog name as a query parameter. You should declare the catalog name parameter as a drop-down list, becau...

2 More Replies
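The query-parameter route applies to Databricks SQL; for the notebook case the original poster describes, one common workaround (a sketch, not an official templating feature - the CATALOG variable name is hypothetical) is to resolve the catalog in Python from an environment variable and run the statement via spark.sql:

```python
import os

def build_query(schema: str, table: str, default_catalog: str = "dev") -> str:
    # Resolve the catalog from an environment variable, falling back to a
    # default when it is unset. Only safe if the catalog value is trusted,
    # since it is interpolated directly into SQL text.
    catalog = os.environ.get("CATALOG", default_catalog)
    return f"SELECT * FROM {catalog}.{schema}.{table}"

os.environ["CATALOG"] = "prod"
query = build_query("schema", "table")
assert query == "SELECT * FROM prod.schema.table"
# In a notebook one would then run: spark.sql(query)
```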
bharath_db
by New Contributor II
  • 1621 Views
  • 1 reply
  • 0 kudos

Activation Email is not coming up in the email

The activation email is not showing up in my inbox. I am not able to start my trial. @Sujitha or @Kaniz - please help!

Latest Reply
bharath_db
New Contributor II
  • 0 kudos

@Sujitha or @Kaniz - I need your help regarding the validation email not reaching my inbox or spam folder to activate my trial.

kurtrm
by New Contributor III
  • 5390 Views
  • 4 replies
  • 0 kudos

Import dbfs file into workspace using Python SDK

Hello, I am looking to replicate the functionality provided by the databricks_cli Python package using the Python SDK. Previously, using the databricks_cli WorkspaceApi object, I could use the import_workspace or import_workspace_dir methods to move a...

Latest Reply
Kratik
New Contributor III
  • 0 kudos

I am also looking for a way to bring files present in S3 into the workspace programmatically.

3 More Replies
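Under the hood, both the old CLI's import_workspace and the newer SDK's workspace methods call the REST endpoint POST /api/2.0/workspace/import, which expects base64-encoded file content. A sketch of building that request body (the workspace path and file content here are hypothetical, and actually sending the request would additionally need a workspace URL and token):

```python
import base64
import json

def import_payload(path: str, source: bytes, language: str = "PYTHON") -> str:
    # JSON body for POST /api/2.0/workspace/import.
    # The Workspace API requires `content` to be base64-encoded.
    body = {
        "path": path,                                    # hypothetical target path
        "content": base64.b64encode(source).decode("ascii"),
        "format": "SOURCE",                              # import as source code
        "language": language,
        "overwrite": True,
    }
    return json.dumps(body)

payload = json.loads(import_payload("/Users/someone@example.com/demo.py",
                                    b"print('hello')"))
assert base64.b64decode(payload["content"]) == b"print('hello')"
```

The same round trip works for S3-hosted files: read the object's bytes first, then pass them as `source`.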
alesventus
by Contributor
  • 1415 Views
  • 0 replies
  • 0 kudos

Big time differences in reading tables

When I read a managed table in Databricks I can see big differences in time spent. A small test table with just 2 records is once loaded in 3 seconds and another time in 30 seconds. Reading table_change for this tiny table took 15 minutes. Don't know ...

Get Started Discussions
performance issue
hokam
by New Contributor II
  • 4036 Views
  • 1 reply
  • 1 kudos

ENDPOINT_NOT_FOUND error coming for /2.0/clusters/list-zones API on Databricks running over GCP

Hi, I am trying to build ETL data pipelines on a Databricks workspace that is running on GCP. For automated cluster creation, when I try to access the list-availability-zones REST API for cluster creation, it fails with an endpoint not foun...

KVNARK
by Honored Contributor II
  • 4560 Views
  • 1 reply
  • 1 kudos

To enroll for featured member interview

What is the procedure to enroll ourselves in the featured member interview?

Latest Reply
Sujitha
Databricks Employee
  • 1 kudos

Hello @KVNARK We appreciate your interest in becoming a part of our featured member recognition system. Regrettably, at the moment, enrollment is not possible through an application process. Instead, we identify the top contributors based on their ac...

yzhang
by New Contributor III
  • 3291 Views
  • 2 replies
  • 4 kudos

Resolved! Is there a plan to support workflow jobs to be stored in a subfolder?

I have many workflow jobs created, and they are all in a flat list. Is there a way to create (kind of) subfolders into which I can categorize my Databricks workflow jobs (a kind of organizer)...

Latest Reply
yzhang
New Contributor III
  • 4 kudos

@Anonymous thanks for the suggestion. And thanks a lot @Vinay_M_R for answering the question. The solution mentioned is doable but a less optimal way to do it. Everyone in the team has to follow the same rules, especially for shared jobs, and sometimes n...

1 More Replies
GrahamBricks
by New Contributor
  • 3565 Views
  • 0 replies
  • 0 kudos

terraform jobs depends_on

I am attempting to automate job creation using the Databricks Terraform provider. I have a number of tasks that will "depends_on" each other and am trying to use dynamic content to do this. Each task name is stored in a string array, so looping over th...

CraiMacl_23588
by New Contributor
  • 855 Views
  • 0 replies
  • 0 kudos

Init scripts in legacy workspace (pre-E2)

Hello, I've got a legacy workspace (not E2) and I am trying to move my cluster-scoped init script to the workspace area (from DBFS). It doesn't seem to be possible to store a shell script in the workspace area (accepted formats: .dbc, .scala, .py, .sq...

Phani1
by Databricks MVP
  • 3094 Views
  • 2 replies
  • 1 kudos

Resolved! Databricks SQL warehouse best practices

How best can we design a Databricks SQL warehouse for multiple environments and multiple data marts? Are there any best practices or guidelines?

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Phani1 We haven't heard from you since the last response from @Vinay_M_R, and I was checking back to see if her suggestions helped you. If you have found a solution, please share it with the community, as it can be helpful to others. Also...

1 More Replies
