Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

Borkadd
by New Contributor II
  • 1569 Views
  • 1 reply
  • 0 kudos

Multi Task Job creation through Pulumi

I am trying to create a multi-task Databricks Job in Azure Cloud with its own cluster. Although I was able to create a single-task job without any issues, the code to deploy the multi-task job fails due to the following cluster validation error: error:...

Latest Reply
Borkadd
New Contributor II
  • 0 kudos

Hello @Retired_mod, thanks for your answer, but the problem stays the same. I had already tested with different cluster configurations, single-node and multi-node, including those cluster configurations which worked with single-task jobs, but the err...

Aria
by New Contributor III
  • 8147 Views
  • 2 replies
  • 2 kudos

Databricks Asset bundle

Hi, I am new to Databricks. We are trying to use Databricks Asset Bundles for code deployment. I have spent a lot of time, but so many things are still not clear to me. Can we change the target path of the notebooks deployed from /shared/.bundle/* to so...

Latest Reply
Еmil
New Contributor III
  • 2 kudos

Hi @Retired_mod, thank you for your post. I thought it would solve my issues too; however, after reading your suggestion, it was nothing new to me, because I had already done exactly that. Here is what I have done so you or anyone can replicate it: 1. ...

1 More Replies
Phani1
by Valued Contributor II
  • 488 Views
  • 0 replies
  • 0 kudos

Auto Loader notebook for multiple tables

Hi Team, my requirement is: I have File A from source A which needs to be written into multiple Delta tables, i.e. DeltaTableA, DeltaTableB, DeltaTableC. Is it possible to have a single instance of an Auto Loader script (multiple write streams)? Could you p...

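One pattern that fits this question is a single Auto Loader read stream fanned out to several Delta tables inside one foreachBatch callback. The sketch below is illustrative, not official guidance; the table names come from the post, while the source path, file format, and checkpoint location are placeholders.

```python
# Sketch: one Auto Loader read stream writing to several Delta tables from a
# single foreachBatch callback. Paths and options are placeholders.
TARGETS = ["DeltaTableA", "DeltaTableB", "DeltaTableC"]

def write_all(batch_df, batch_id):
    # Cache once so the micro-batch is not recomputed for every target table.
    batch_df.persist()
    for table in TARGETS:
        # Each table could also get its own filter/transform here.
        batch_df.write.format("delta").mode("append").saveAsTable(table)
    batch_df.unpersist()

# On Databricks, wiring it up would look roughly like:
# (spark.readStream
#      .format("cloudFiles")                       # Auto Loader
#      .option("cloudFiles.format", "json")
#      .load("/mnt/source-a/")                     # File A's landing path
#      .writeStream
#      .foreachBatch(write_all)
#      .option("checkpointLocation", "/mnt/checkpoints/file_a")
#      .start())
```

With foreachBatch there is one input stream and one checkpoint. If fully independent streams per table are needed instead, three separate writeStream calls against the same source path (each with its own checkpoint) also work, at the cost of reading the source files three times.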
Phani1
by Valued Contributor II
  • 1991 Views
  • 0 replies
  • 0 kudos

Unity catalog Migration

Hi Team, could you please help me understand: 1) Why do we need to migrate to Unity Catalog? If we are not migrating, what benefits will we miss? 2) How do we migrate to Unity Catalog (which objects need to be migrated, and is there any tool)? Regards, Phanindra

margutie
by New Contributor
  • 865 Views
  • 0 replies
  • 0 kudos

Error from Knime through proxy

I want to connect to Databricks from Knime on a company computer that uses a proxy. The error I'm encountering is as follows: ERROR Create Databricks Environment 3:1 Execute failed: Could not open the client transport with JDBC URI: jdbc:hive2://adb-...

Sreekanth_N
by New Contributor II
  • 2632 Views
  • 2 replies
  • 0 kudos

'NotebookHandler' object has no attribute 'setContext' in pyspark streaming in AWS

I am facing an issue while calling dbutils.notebook.run() inside PySpark streaming with a concurrent executor. At first the error is "pyspark.sql.utils.IllegalArgumentException: Context not valid. If you are calling this outside the main thread, you mus...

Latest Reply
Kevin3
New Contributor III
  • 0 kudos

The error message you're encountering in PySpark when using dbutils.notebook.run() suggests that the context in which you are attempting to call the run() method is not valid. PySpark notebooks in Databricks have certain requirements when it comes to...

1 More Replies
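The workaround usually shared for this error is to capture the notebook context on the main thread and re-attach it inside each worker thread. Note the hedge: this goes through dbutils internals reached via entry_point, which are undocumented and can change between Databricks runtimes, and run_in_thread and the notebook paths below are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_thread(dbutils, path, ctx, timeout=600, args=None):
    # dbutils.notebook itself has no setContext (hence the AttributeError in
    # the post); the internal handle reached via entry_point does.
    nb = dbutils.notebook.entry_point.getDbutils().notebook()
    nb.setContext(ctx)  # re-attach the driver context in this worker thread
    return dbutils.notebook.run(path, timeout, args or {})

# On Databricks (main thread): capture the context once, then fan out.
# ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
# paths = ["/Repos/team/nb_a", "/Repos/team/nb_b"]   # illustrative
# with ThreadPoolExecutor(max_workers=4) as pool:
#     futures = [pool.submit(run_in_thread, dbutils, p, ctx) for p in paths]
#     results = [f.result() for f in futures]
```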
Khushisi
by New Contributor II
  • 561 Views
  • 0 replies
  • 0 kudos

Databricks to make a machine learning model

Hey all, I've been using a voice cloning AI and it's working well. I'm thinking of using Databricks to build a machine learning model for speech tech. I want to start with personal content creation. Any tips or advice would be great!

Phani1
by Valued Contributor II
  • 1387 Views
  • 0 replies
  • 0 kudos

Billing usage per user

Hi Team, Unity Catalog is not enabled in our workspace. We would like to know the billing usage information per user; could you please help us with how to get these details (using a notebook-level script)? Regards, Phanindra

anandreddy23
by New Contributor II
  • 4036 Views
  • 1 reply
  • 0 kudos

unpersist doesn't clear

from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf
from pyspark.storagelevel import StorageLevel
spark = SparkSession.builder.appName('TEST').config('spark.ui.port','4098').enableHiveSupport().getOrCreate()
df4 = spark.sql('...

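For reference, DataFrame.unpersist() is non-blocking by default, so cached blocks are released asynchronously and may still show up in the Spark Storage UI for a while afterwards; passing blocking=True makes the call wait for the removal. A minimal sketch (the release helper name is ours, not a Spark API):

```python
def release(df):
    # unpersist() returns immediately by default; blocking=True makes the
    # call wait until the cached blocks are actually dropped.
    df.unpersist(blocking=True)
    return df

# On a Spark session:
# df4 = spark.sql("select * from some_table")
# df4.persist(StorageLevel.MEMORY_AND_DISK)
# ... use df4 ...
# release(df4)   # cached blocks are gone when this returns
```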
Latest Reply
anandreddy23
New Contributor II
  • 0 kudos

Thank you so much for taking time and explaining the concepts

MFrandsen
by New Contributor
  • 864 Views
  • 0 replies
  • 0 kudos

Question for exam project

For my exam I have to do a small project for the company I'm interning at. I am creating a data warehouse where I will have to transfer data from another database and then transform it into a star schema. Would Databricks be good for this, or is it t...

Phani1
by Valued Contributor II
  • 852 Views
  • 2 replies
  • 0 kudos

checklist for : process to move and deploy in the prod

Hi Team, could you please help me with best practices for moving and deploying (code, workspace, notebooks, etc.) to prod? Regards, Phanindra

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

The most important thing is to use Repos! Link your workspace with Git and use feature branches and pull requests to promote code/notebooks. Check the Databricks docs on Repos. If you have further questions, shoot.

1 More Replies
Phani1
by Valued Contributor II
  • 1723 Views
  • 1 reply
  • 0 kudos

Archival Strategy for Delta tables

Hi Team, we would like to define the archival strategy for our data. Could you please share best practices / guide me on the 3 use cases below? Case 1: On-prem SQL and Oracle data which is more than 20 years old, and they wanted to bring it into the clou...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Case 1: I'd extract the data from the DB to a data lake (cold storage if that is possible, as that is cheaper) using an ETL tool like Data Factory, Glue, etc. Then the archiving can take place. Perhaps also create a backup of the data on a 2nd data lak...

Phani1
by Valued Contributor II
  • 2063 Views
  • 2 replies
  • 0 kudos

Databricks setup/deployment checklist/best practices

Hi Team, could you please share or guide us on any checklist/best practices for Databricks setup/deployment?

Latest Reply
icyflame92
New Contributor II
  • 0 kudos

Hi @Phani1, here are some best practices: https://github.com/Azure/AzureDatabricksBestPractices/tree/master, and you could take these points as your "checklist". Choose the right Databricks workspace: decide on the appropriate Azure region for your Data...

1 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group