Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

397973
by New Contributor III
  • 870 Views
  • 1 reply
  • 1 kudos

Is it possible to concatenate two notebooks?

I don't think it's possible but I thought I would check. I need to combine notebooks. While developing I might have code in various notebooks. I read them in with "%run". Then when all looks good I combine many cells into fewer notebooks. Is there any...

Latest Reply
Alberto_Umana
Databricks Employee
  • 1 kudos

Hi @397973, Combining multiple notebooks into a single notebook isn't an out-of-the-box feature, but you can try chaining %run commands and checking the output to see if it works, sort of like: %run "/path/to/notebook1" followed by %run "/path/to/notebook2"
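Another option is to do the concatenation outside the workspace UI: export each notebook as a source file and merge them locally. The sketch below is a minimal, hypothetical helper (the `merge_notebooks` function and the file paths are not part of any Databricks API); it assumes the notebooks were already exported as `.py` source files:

```python
from pathlib import Path

def merge_notebooks(sources, out_path):
    """Concatenate exported notebook source files into a single file,
    separating each with a Databricks cell marker comment."""
    parts = []
    for src in sources:
        text = Path(src).read_text()
        parts.append(f"# COMMAND ----------\n# Source: {src}\n{text}")
    Path(out_path).write_text("\n".join(parts))
    return out_path

# Hypothetical usage, assuming notebook1.py and notebook2.py were exported
# from the workspace as source files:
# merge_notebooks(["notebook1.py", "notebook2.py"], "combined.py")
```

The resulting file can then be imported back into the workspace as a single notebook, since Databricks treats `# COMMAND ----------` as a cell separator in exported source files.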

ramyav7796
by New Contributor II
  • 1307 Views
  • 2 replies
  • 1 kudos

Databricks Lakehouse Monitoring

Hi, I am trying to implement lakehouse monitoring using an Inference profile for my inference data. I see that when I create the monitor, two tables get generated, profile and drift. I wanted to understand how are these two tables generating a...

Latest Reply
Louis_Frolio
Databricks Employee
  • 1 kudos

When you create a Databricks Lakehouse Monitoring monitor with an Inference profile, the system automatically generates two metric tables: a profile metrics table and a drift metrics table. Here's how this process works:
Background Processing: When yo...

1 More Replies
ShivangiB
by New Contributor III
  • 1607 Views
  • 2 replies
  • 0 kudos

Liquid Clustering Key Change Question

If I already have clustering key key1 for an existing table and I want to change the clustering key to key2 using ALTER TABLE table CLUSTER BY (key2), then run OPTIMIZE table, based on the Databricks documentation, existing files will not be rewritten (verified by my test as w...

Latest Reply
lingareddy_Alva
Honored Contributor III
  • 0 kudos

@ShivangiB You're correct in your understanding. When you change a clustering key using ALTER TABLE followed by OPTIMIZE, it doesn't automatically recluster existing data. Let me explain why this happens and what options you have. In Delta Lake (which...

1 More Replies
HarryRichard08
by New Contributor II
  • 806 Views
  • 1 reply
  • 0 kudos

Unable to Access S3 from Serverless but Works on Cluster

Hi everyone, I am trying to access data from S3 using an access key and secret. When I run the code through Databricks clusters, it works fine. However, when I try to do the same from a serverless cluster, I am unable to access the data. I have alread...

Latest Reply
Advika
Community Manager
  • 0 kudos

Hello @HarryRichard08! It looks like this post duplicates the one you recently posted. A response has already been provided to the Original post. I recommend continuing the discussion in that thread to keep the conversation focused and organized.

Fasih_Ahmed
by New Contributor III
  • 4374 Views
  • 4 replies
  • 0 kudos

Resolved! Exam suspended due to sudden power cut

Hi @Cert-Team   I hope this message finds you well. I am writing to request a review of my recently suspended exam. I believe that my situation warrants reconsideration, and I would like to provide some context for your understanding.I applied for Da...

Latest Reply
Cert-Bricks
Databricks Employee
  • 0 kudos

This has been resolved. 

3 More Replies
Lackshu
by New Contributor II
  • 2601 Views
  • 2 replies
  • 0 kudos

Workspace Assignment Issue via REST API

I’m relying on workspace assignment via REST API to have the account user created in the workspace. This is like the workspace assignment screen at account level or the add existing user screen at workspace level. The reference URL is below. Workspace ...

Latest Reply
Lackshu
New Contributor II
  • 0 kudos

It turns out the problem is the documentation. It says that the permissions parameter (that's supplied in the request body) is an array of strings. It really just expects a string, either UNKNOWN, USER, or ADMIN. It would be great if the team could fix the documentat...
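The mismatch the poster describes can be shown as plain JSON. The two bodies below are illustrative only: the array shape is what the documentation implies, the bare-string shape is what the poster reports the endpoint actually accepting (UNKNOWN / USER / ADMIN values come from the post):

```python
import json

# Body shape implied by the documentation: an array of strings.
documented_body = {"permissions": ["USER"]}

# Body shape the poster found the endpoint actually accepts: a bare string,
# one of "UNKNOWN", "USER", or "ADMIN".
working_body = {"permissions": "USER"}

print(json.dumps(documented_body))  # {"permissions": ["USER"]}
print(json.dumps(working_body))     # {"permissions": "USER"}
```

If you hit a 400-style error with the documented array form, trying the bare-string form (per the poster's finding) may be worth a test before filing a support ticket.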

1 More Replies
gauravmahajan
by New Contributor II
  • 1134 Views
  • 3 replies
  • 0 kudos

Require Information on SQL Analytics DBU Cluster

Hello Team, We are seeking cost information as we have noticed fluctuations in the daily costs for the "SQL Analytics DBU." We would like to understand the reasons behind the daily cost differences, even though the workload remains consistent. Trying to...

Latest Reply
Nivethan_Venkat
Valued Contributor
  • 0 kudos

Hi @gauravmahajan, Most of the cost / DBU usage can be retrieved from system tables across the different workspaces in a Databricks account. Details related to job compute types and their associated cost can be fetched from the queries mentioned in the...

2 More Replies
kasuskasus1
by New Contributor III
  • 972 Views
  • 2 replies
  • 0 kudos

Is there a way to install hail on cluster?

Hi all! Been trying to install hail (https://hail.is/) on Databricks with no luck so far. Is there an easy way to make it work? So far I could not get further than this (providing the sparkContext like `hl.init(sc=spark.sparkContext)` also did not help): import ...

Latest Reply
SriramMohanty
Databricks Employee
  • 0 kudos

You can run %pip install hail in a notebook cell.

1 More Replies
BS_THE_ANALYST
by Esteemed Contributor III
  • 5078 Views
  • 10 replies
  • 19 kudos

Resolved! Databricks Demos

I'm looking to build or select a demo in Databricks. Has anyone found any of the Databricks demos to deliver a "wow" factor? I am new to Databricks and I'm looking to use one of the staple demos if possible. All the best, BS

Latest Reply
Rjdudley
Honored Contributor
  • 19 kudos

> Has anyone found any of the particular Databricks demos to deliver a "wow" factor?
Yes, in fact the last two sprints I did POCs starting with Databricks' AI demos. First, who is your audience: business users, or other technology people? They'll b...

9 More Replies
SB93
by New Contributor II
  • 1177 Views
  • 2 replies
  • 0 kudos

Delta Live Table Pipeline

I have a pipeline that has given me no problems up until today with the following error message: com.databricks.pipelines.common.errors.deployment.DeploymentException: Failed to launch pipeline cluster 0307-134831-tgq587us: Attempt to launch cluster w...

Latest Reply
lingareddy_Alva
Honored Contributor III
  • 0 kudos

@SB93 The error message you are seeing indicates that the cluster failed to launch because the Spark driver was unresponsive, with possible causes being library conflicts, incorrect metastore configuration, or other configuration issues. Given that t...

1 More Replies
Phani1
by Databricks MVP
  • 7902 Views
  • 5 replies
  • 1 kudos

Azure Synapse vs Databricks

Hi team, Could you kindly provide your perspective on the cost and performance comparison between Azure Synapse and Databricks SQL Warehouse/serverless, as well as their respective use cases? Thank you.

Latest Reply
Witold
Honored Contributor
  • 1 kudos

@Suncat There haven't been any major changes for more than a year: https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-version-support E.g. I don't believe we will see support for Spark 3.5 at all. At least, apparently it's support...

4 More Replies
Rachana2
by New Contributor II
  • 1103 Views
  • 3 replies
  • 0 kudos

Databricks lineage

Hello, I am trying to get the table lineage, i.e. upstreams and downstreams, of all tables in Unity Catalog into my local database using API calls. I need my db to be up to date: if the lineage is updated for one of the tables in Databricks, I have to update the sam...

Latest Reply
SantoshJoshi
New Contributor III
  • 0 kudos

Hi @Rachana2, As @Alberto_Umana has mentioned, I'd check the table_lineage / column_lineage system tables, as maintaining lineage through a bespoke pipeline/tooling may not be the right approach. Can you please explain your use case which explains why you don't wa...

2 More Replies
joseroca99
by New Contributor II
  • 2913 Views
  • 6 replies
  • 0 kudos

Resolved! File found with %fs ls but not with spark.read

Code:
wikipediaDF = (spark.read
  .option("HEADER", True)
  .option("inferSchema", True)
  .csv("/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv"))
display(bostonDF)
Error: Failed to store the result. Try rerunning ...

Latest Reply
xx123
New Contributor III
  • 0 kudos

I have the exact same issue. Seems like limiting the display() method works as a temporary solution, but I wonder if there's any long-term one. The idea would be to have the possibility of displaying larger datasets within a notebook. How to achi...

5 More Replies
j_h_robinson
by New Contributor II
  • 1226 Views
  • 1 reply
  • 1 kudos

Resolved! Spreadsheet-Like UI for Databricks

We are currently entering data into Excel and then uploading it into Databricks.  Is there a built-in spreadsheet-like UI within Databricks that can update data directly in Databricks? 

Latest Reply
Advika_
Databricks Employee
  • 1 kudos

Hello, @j_h_robinson! Databricks doesn’t have a built-in spreadsheet-like UI for direct data entry or editing. Are you manually uploading the Excel files or using an ODBC driver setup? If you’re doing it manually, you might find this helpful: Connect...

h_h_ak
by Contributor
  • 6829 Views
  • 5 replies
  • 2 kudos

Resolved! Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?

I’m currently working with Databricks autoscaling configurations and trying to better understand how Spark decides when to spin up additional worker nodes. My cluster has a minimum of one worker and can scale up to five. I know that tasks are assigne...

Latest Reply
filipniziol
Esteemed Contributor
  • 2 kudos

Hi @h_h_ak,
Short Answer: Autoscaling primarily depends on the number of pending tasks. Workspaces on the Premium plan use optimized autoscaling, while those on the Standard plan use standard autoscaling.
Long Answer: Databricks autoscaling responds main...

4 More Replies
