Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

TinasheChinyati
by New Contributor III
  • 3110 Views
  • 0 replies
  • 0 kudos

Stream to stream join NullPointerException

I have a DLT pipeline running in continuous mode. I have a stream-to-stream join which runs for the first 5 hours but then fails with a NullPointerException. I need assistance to know what I need to do to handle this. My code is structured as below: @dl...

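A stream-to-stream join that runs for hours and then fails is often a symptom of unbounded join state; the usual mitigation is watermarks plus a time-bounded join condition. Below is a minimal sketch of that pattern in a DLT pipeline; the table names, column names, and intervals are hypothetical, not taken from the original post.

```python
# Sketch: watermarked stream-stream join in DLT. Table/column names
# ("clicks", "impressions", "ad_id", ...) are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table
def clicks_with_impressions():
    clicks = (
        dlt.read_stream("clicks")
        .withWatermark("click_time", "2 hours")  # bounds state retention
        .alias("c")
    )
    impressions = (
        dlt.read_stream("impressions")
        .withWatermark("imp_time", "2 hours")
        .alias("i")
    )
    # Time-bounded join condition so old state can be evicted
    return clicks.join(
        impressions,
        F.expr("""
            c.ad_id = i.ad_id AND
            c.click_time BETWEEN i.imp_time AND i.imp_time + INTERVAL 1 HOUR
        """),
        "inner",
    )
```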
Ikanip
by New Contributor II
  • 5435 Views
  • 4 replies
  • 2 kudos

Resolved! How to choose a compute, and how to find alternatives for the current compute being used?

We are using a compute type for an interactive cluster in production, which incurs X amount of cost. We want to know what options are available with roughly the same processing power as the current compute but at a cost of Y, which is less...

Latest Reply
raphaelblg
Databricks Employee
  • 2 kudos

Hello @Ikanip , You can utilize the Databricks Pricing Calculator to estimate costs. For detailed information on compute capacity, please refer to your cloud provider's documentation regarding Virtual Machine instance types.

3 More Replies
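Beyond the pricing calculator, one way to ground the comparison in actual usage is the billing system table, if Unity Catalog system tables are enabled in the workspace. A rough sketch, with the cluster ID as a placeholder:

```python
# Sketch: summarize DBU consumption for one cluster from the billing
# system table (requires system tables to be enabled).
# '<cluster-id>' is a placeholder.
usage = spark.sql("""
    SELECT sku_name, SUM(usage_quantity) AS total_dbus
    FROM system.billing.usage
    WHERE usage_metadata.cluster_id = '<cluster-id>'
      AND usage_date >= current_date() - INTERVAL 30 DAYS
    GROUP BY sku_name
""")
usage.show()
```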
scottbisaillon
by New Contributor
  • 1535 Views
  • 0 replies
  • 0 kudos

Databricks Running Jobs and Terraform

What happens to a currently running job when a workspace is deployed again using Terraform? Are the jobs paused/resumed, or are they left unaffected without any downtime? Searching for this specific scenario doesn't seem to come up with anything, and...

mh_db
by New Contributor III
  • 11167 Views
  • 5 replies
  • 1 kudos

Job parameters to get date and time

I'm trying to set up a workflow in Databricks and I need my job parameter to get the date and time. I see in the documentation there are some options for dynamic values. I'm trying to use this one: {{job.start_time.[argument]}}. For the "argument" there, ...

Latest Reply
brockb
Databricks Employee
  • 1 kudos

Then please change the code to:

```python
iso_datetime = dbutils.widgets.get("LoadID")
```

4 More Replies
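For context, the pattern discussed in this thread is: set the job parameter's value to a dynamic reference such as {{job.start_time.iso_datetime}}, then read it in the notebook with a widget. A minimal sketch, assuming a parameter named LoadID as in the thread:

```python
# Sketch: read a job parameter populated by the dynamic value reference
# {{job.start_time.iso_datetime}}. "LoadID" is the parameter name used in
# this thread; dbutils is available inside Databricks notebooks.
from datetime import datetime

iso_datetime = dbutils.widgets.get("LoadID")   # e.g. "2024-05-01T09:30:00"
start = datetime.fromisoformat(iso_datetime)
print(start.date(), start.strftime("%H:%M:%S"))
```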
Abhay_1002
by New Contributor
  • 905 Views
  • 0 replies
  • 0 kudos

Archive file support in Jar Type application

In my Spark application, I am using a set of Python libraries. I am submitting the Spark application as a JAR task, but I am not able to find any option to provide archive files. So, in order to handle Python dependencies, I am using this approach: create an archive file...

Surajv
by New Contributor III
  • 1331 Views
  • 0 replies
  • 0 kudos

Getting databricks-connect com.fasterxml.jackson.databind.exc.MismatchedInputException parse warning

Hi community, I am getting the warning below when I use PySpark code for some of my use cases with databricks-connect. Is this a critical warning, and any idea what it means? Logs: WARN DatabricksConnectConf: Could not parse /root/.databricks-c...

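For what it's worth, the legacy databricks-connect client keeps its settings in a plain JSON file at ~/.databricks-connect, and this warning typically means that file is missing or malformed. A quick sketch to check whether the file parses; the key names listed are the ones the legacy client expects:

```python
# Sketch: validate the legacy databricks-connect config file. If this
# raises JSONDecodeError, the "Could not parse" warning is expected.
import json
from pathlib import Path

cfg_path = Path.home() / ".databricks-connect"
cfg = json.loads(cfg_path.read_text())
# Typical keys: host, token, cluster_id, org_id, port
print({k: ("***" if k == "token" else v) for k, v in cfg.items()})
```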
Surajv
by New Contributor III
  • 12498 Views
  • 1 reply
  • 0 kudos

Getting Python version errors when using PySpark RDDs with databricks-connect

Hi community, when I use PySpark RDD-related functions in my environment with databricks-connect, I get the error below. Databricks cluster version: 12.2. `RuntimeError: Python in worker has different version 3.9 than that in driver 3.10, PySpark cannot...

Latest Reply
Surajv
New Contributor III
  • 0 kudos

Got it. As a side note, I tried the above methods, but the error persisted. Upon reading the docs again, I found this statement: You must install Python 3 on your development machine, and the minor version of your client Python installation must be t...

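To make the version match concrete: the statement quoted above means the local client's Python minor version must equal the cluster's. A small sketch for checking this before connecting; the expected version is an assumption based on the DBR 12.2 and "worker has 3.9" details in this thread:

```python
# Sketch: fail fast if the local Python minor version does not match the
# cluster's. Expected (3, 9) is an assumption drawn from this thread.
import sys

EXPECTED = (3, 9)
if sys.version_info[:2] != EXPECTED:
    raise RuntimeError(
        f"Local Python is {sys.version_info[:2]}, but the cluster expects "
        f"{EXPECTED}; create a matching virtualenv for databricks-connect."
    )
```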
Hubcap7700
by New Contributor II
  • 2552 Views
  • 0 replies
  • 1 kudos

Native Slack Integration

Hi, are there any plans to build native Slack integration? I'm envisioning a one-time connector to Slack that would automatically populate all channels and users to select from, for example, when configuring an alert notification. It does not seem ...

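Until such native integration exists, a common workaround is a Slack incoming webhook registered as a notification destination; posting to one is a single HTTP call. A minimal sketch, with the webhook URL as a placeholder you generate in Slack:

```python
# Sketch: post an alert message to a Slack incoming webhook.
# The URL below is a placeholder; create one in your Slack workspace.
import json
import urllib.request

webhook_url = "https://hooks.slack.com/services/T000/B000/XXXX"
payload = {"text": "Databricks alert: daily load finished with warnings."}

req = urllib.request.Request(
    webhook_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```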
ymt
by New Contributor II
  • 2426 Views
  • 0 replies
  • 1 kudos

Connection from Databricks to Snowflake using Okta

Hi team, this is how I connect to Snowflake from a Jupyter notebook:

```python
import snowflake.connector

snowflake_connection = snowflake.connector.connect(
    authenticator='externalbrowser',
    user='U1',
    account='company1.us-east-1',
    database='db1',
    ...
```

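Note that authenticator='externalbrowser' relies on a browser pop-up for the Okta flow, which a Databricks cluster cannot open. On Databricks the usual route is the bundled Spark Snowflake connector with a non-interactive credential (password, key pair, or OAuth token). A rough sketch, with all option values as placeholders:

```python
# Sketch: read from Snowflake via the Spark connector bundled with
# Databricks. All option values are placeholders; swap in a
# non-interactive auth method (secrets, key pair, or OAuth) for Okta setups.
options = {
    "sfUrl": "company1.us-east-1.snowflakecomputing.com",
    "sfUser": "U1",
    "sfPassword": dbutils.secrets.get("snowflake", "password"),
    "sfDatabase": "db1",
    "sfSchema": "public",
    "sfWarehouse": "wh1",
}

df = (
    spark.read.format("snowflake")
    .options(**options)
    .option("dbtable", "my_table")
    .load()
)
```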
sharpbetty
by New Contributor II
  • 4806 Views
  • 2 replies
  • 1 kudos

Workflows: Running dependent task despite earlier task fail

I have a scheduled task running in a workflow. Task 1 computes some parameters, and these are picked up by a dependent reporting task, Task 2. I want Task 2 to report "Failure" if Task 1 fails. Yet creating a dependency in Workflows means that Task 2 wil...

Latest Reply
NerdSan
New Contributor II
  • 1 kudos

Hi @sharpbetty, any suggestions on how I can keep the parameter sharing and dependency from Task 1 to Task 2, yet also allow Task 2 to fire even on failure of Task 1? Setup: Task 2 dependent on Task 1. Challenge: to fire Task 2 even on Task 1 failure. Soluti...

1 More Replies
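The setting this thread is circling is the task-level "Run if dependencies" condition (run_if in the Jobs API), which lets a downstream task fire regardless of upstream outcome while keeping the dependency edge. A sketch of the relevant Jobs API 2.1 task fragment; task keys and the notebook path are hypothetical:

```python
# Sketch: Jobs API 2.1 task definition where task2 runs even if task1
# fails. "ALL_DONE" runs once all dependencies finish, regardless of
# outcome. Task keys and notebook path are hypothetical.
task2 = {
    "task_key": "task2_report",
    "depends_on": [{"task_key": "task1_compute_params"}],
    "run_if": "ALL_DONE",
    "notebook_task": {"notebook_path": "/Repos/jobs/report"},
}
```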
Abhay_1002
by New Contributor
  • 2407 Views
  • 1 reply
  • 0 kudos

Issue with Python Package Management in Spark application

In a PySpark application, I am using a set of Python libraries. In order to handle Python dependencies while running the PySpark application, I am using the approach provided by Spark: create an archive file of the Python virtual environment using the required set o...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hi, I have not tried it, but based on the doc you have to go by this approach; `./environment/bin/python` must be replaced with the correct path.

```python
import os
from pyspark.sql import SparkSession

os.environ['PYSPARK_PYTHON'] = "./environment/bin/python"
sp...
```

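Completing the truncated snippet above: the approach from the Spark docs packs the virtualenv with venv-pack, ships it via spark.archives, and points PYSPARK_PYTHON at the unpacked interpreter. A minimal sketch:

```python
# Sketch: run PySpark against a packed virtualenv (see the Spark docs on
# Python package management). Assumes "environment.tar.gz" was created
# beforehand with `venv-pack -o environment.tar.gz`.
import os
from pyspark.sql import SparkSession

os.environ["PYSPARK_PYTHON"] = "./environment/bin/python"
spark = (
    SparkSession.builder
    .config("spark.archives", "environment.tar.gz#environment")
    .getOrCreate()
)
```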
Nagarathna
by New Contributor II
  • 3147 Views
  • 3 replies
  • 1 kudos

File not found error when trying to read a JSON file from AWS S3 using `with open`

I am trying to read JSON from AWS S3 using `with open` in a Databricks notebook on a shared cluster. Error message: No such file or directory: '/dbfs/mnt/datalake/input_json_schema.json'. On a single-instance cluster the above error does not occur.

Latest Reply
NandiniN
Databricks Employee
  • 1 kudos

Hi @Nagarathna, I just tried it on a shared cluster and did not face any issue. What is the exact error you are facing? The complete stack trace might help. Just to confirm, are you accessing "/dbfs/mnt/datalake/input.json" from the same workspac...

2 More Replies
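For reference, on shared-access-mode clusters the /dbfs FUSE path to mounts can be restricted, which would explain `open()` failing there but not on a single-user cluster. Two sketches of alternatives that stay on the Spark/dbutils APIs, using the path from the post:

```python
# Sketch: read the JSON without the /dbfs FUSE mount, which shared
# access mode may block. Both variants use the path from the post.
import json

# Variant 1: small file; pull it through dbutils (up to ~1 MB here).
raw = dbutils.fs.head("dbfs:/mnt/datalake/input_json_schema.json", 1024 * 1024)
schema = json.loads(raw)

# Variant 2: let Spark read it directly.
df = (
    spark.read.option("multiLine", "true")
    .json("dbfs:/mnt/datalake/input_json_schema.json")
)
```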
databricksdev
by New Contributor II
  • 1777 Views
  • 2 replies
  • 0 kudos

Can we customize the job run name when running Azure Databricks notebook jobs from Azure Data Factory?

Hi all, we are executing a Databricks notebook activity inside a child pipeline through ADF. We are getting the child pipeline name as the job name while executing the Databricks job. Is it possible to get the master pipeline name as the job name, or to customize the job name thr...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

I think we should raise a product feedback request. I am not sure whether Databricks or Microsoft would own it, but you may submit feedback for Databricks here: https://docs.databricks.com/en/resources/ideas.html

1 More Replies
DataBricks_Use1
by New Contributor
  • 2472 Views
  • 2 replies
  • 0 kudos

FileReadException Error

Hi, I am getting a FileReadException error while reading a JSON file using the REST API connector. It occurs when the JSON file is huge, and it is not able to handle more than 100,000 (1 lakh) records. Error details: org.apache.spark.SparkException: Job aborted due to sta...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hello @DataBricks_Use1, it would be great if you could add the entire stack trace, as Jose mentioned. But there should be a "Caused by:" section below, which would give you an idea of the reason for this failure, and then you can work on that. fo...

1 More Replies
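As a general note while waiting for that stack trace: very large JSON payloads fetched through a REST connector often fail when parsed as one multiline document, because each file must fit in a single executor task. One hedged workaround is to land the response as JSON Lines and let Spark parallelize the read; the paths and the fetch helper below are placeholders, not taken from the original post:

```python
# Sketch: write the API response as JSON Lines (one record per line) and
# read it with Spark, which splits line-delimited JSON across tasks.
# Paths are placeholders; fetch_all_records() is a hypothetical helper
# wrapping the REST calls.
import json

records = fetch_all_records()

with open("/dbfs/tmp/api_dump.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

df = spark.read.json("dbfs:/tmp/api_dump.jsonl")  # no multiLine needed
```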
