Get Started Discussions

Forum Posts

Surajv
by New Contributor III
  • 56 Views
  • 1 reply
  • 0 kudos

Getting databricks-connect com.fasterxml.jackson.databind.exc.MismatchedInputException parse warning

Hi community, I am getting the warning below when I try using PySpark code for some of my use cases with databricks-connect. Is this a critical warning, and any idea what it means? Logs: WARN DatabricksConnectConf: Could not parse /root/.databricks-c...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Surajv, the warning you’re encountering is related to using Databricks Connect with PySpark. Databricks Connect is a Python library that allows you to connect your local development environment to a Databricks cluster. I...
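A minimal diagnostic sketch, assuming the truncated path in the log is the legacy client's JSON config file (commonly ~/.databricks-connect; the path is an assumption); a malformed or empty file is a typical cause of this parse warning:

```
# Hypothetical check: validate the databricks-connect JSON config.
import json
from pathlib import Path

config_path = Path.home() / ".databricks-connect"  # assumed location
try:
    config = json.loads(config_path.read_text())
    print("Config parsed OK; keys:", sorted(config))
except FileNotFoundError:
    print(f"No config file found at {config_path}")
except json.JSONDecodeError as err:
    print(f"Config is not valid JSON: {err}")
```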

rafal_walisko
by New Contributor
  • 64 Views
  • 1 reply
  • 0 kudos

Optimal Strategies for downloading large query results with Databricks API

Hi everyone, I'm currently facing an issue with handling a large amount of data using the Databricks API. Specifically, I have a query that returns a significant volume of data, sometimes resulting in over 200 chunks. My initial approach was to retriev...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @rafal_walisko, Handling large volumes of data using the Databricks API can indeed be challenging, especially when dealing with numerous chunks. Let’s explore some strategies that might help you optimize your approach: Rate Limits and Paral...
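As an illustration of the chunked approach, here is a hedged sketch using the SQL Statement Execution API with the EXTERNAL_LINKS disposition; the host, token, and statement ID are placeholders:

```
# Sketch: fetch one result chunk via its presigned external link.
import requests

HOST = "https://<workspace>.cloud.databricks.com"              # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

def fetch_chunk(statement_id, chunk_index):
    # Resolve the chunk to a short-lived presigned URL, then download it.
    meta = requests.get(
        f"{HOST}/api/2.0/sql/statements/{statement_id}/result/chunks/{chunk_index}",
        headers=HEADERS,
    ).json()
    url = meta["external_links"][0]["external_link"]
    return requests.get(url).content  # presigned URL; no auth header needed
```

Chunks can then be fetched in parallel (e.g. with a thread pool) while staying within the workspace's API rate limits.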

ymt
by New Contributor
  • 63 Views
  • 1 reply
  • 0 kudos

Connection from Databricks to Snowflake using OKTA

Hi team, this is how I connect to Snowflake from a Jupyter notebook:

import snowflake.connector
snowflake_connection = snowflake.connector.connect(
    authenticator='externalbrowser',
    user='U1',
    account='company1.us-east-1',
    database='db1',...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @ymt, It seems you’ve encountered an issue while connecting to Snowflake from your Databricks Notebook. The error message you received is: ImportError: cannot import name 'NamedTuple' from 'typing_extensions' (/databricks/python/lib/python3.9/s...
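If the root cause is the cluster's pinned typing_extensions predating NamedTuple (an assumption; the thread is truncated), one common notebook-scoped remedy looks like this:

```
# Cell 1: upgrade the package that raises the ImportError.
%pip install --upgrade typing_extensions snowflake-connector-python

# Cell 2 (run separately): restart the Python process so the
# upgraded version is picked up by the current notebook session.
dbutils.library.restartPython()
```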

rahuja
by New Contributor
  • 82 Views
  • 1 reply
  • 0 kudos

Py4JError: An error occurred while calling o992.resourceProfileManager

Hello, I am trying to run the SparkXGBoostRegressor and I am getting the following error: Py4JError: An error occurred while calling o992.resourceProfileManager. Trace: py4j.security.Py4JSecurityException: Method public org.apache.spark.resource...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @rahuja, The error you’re encountering might be related to the interaction between PySpark and XGBoost. Let’s explore some potential solutions: PySpark Version Compatibility: Ensure that your PySpark version is compatible with the XGBoost vers...
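For context, a minimal sketch of the distributed estimator from xgboost.spark (assuming xgboost >= 1.7); the Py4JSecurityException above is commonly reported on shared access mode clusters, where switching to single-user access mode is a usual workaround:

```
# Sketch: tiny SparkXGBRegressor fit on an assembled feature vector.
from pyspark.ml.feature import VectorAssembler
from xgboost.spark import SparkXGBRegressor

train = spark.createDataFrame(
    [(1.0, 2.0, 3.0), (4.0, 5.0, 9.0)], ["x1", "x2", "label"])
train = VectorAssembler(
    inputCols=["x1", "x2"], outputCol="features").transform(train)

model = SparkXGBRegressor(label_col="label", num_workers=2).fit(train)
```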

mh_db
by New Contributor II
  • 232 Views
  • 5 replies
  • 1 kudos

Job parameters to get date and time

I'm trying to set up a workflow in Databricks and I need my job parameter to get the date and time. I see in the documentation there are some options for dynamic values. I'm trying to use this one: {{job.start_time.[argument]}}. For the "argument" there, ...

Latest Reply
brockb
New Contributor III
  • 1 kudos

Then please change the code to:
```
iso_datetime = dbutils.widgets.get("LoadID")
```
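As a follow-on, a hedged sketch of parsing that widget value (the "LoadID" name comes from the reply above; an ISO-8601 timestamp format is assumed):

```
# Parse the job start time passed in via the widget into date/time parts.
from datetime import datetime

iso_datetime = dbutils.widgets.get("LoadID")  # e.g. "2024-03-01T06:00:00Z"
start = datetime.fromisoformat(iso_datetime.replace("Z", "+00:00"))
run_date = start.date().isoformat()   # "2024-03-01"
run_time = start.time().isoformat()   # "06:00:00"
```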

4 More Replies
Surajv
by New Contributor III
  • 356 Views
  • 2 replies
  • 0 kudos

Getting Python version errors when using PySpark RDDs with databricks-connect

Hi community, when I use PySpark RDD-related functions in my environment with databricks-connect, I get the error below. Databricks cluster version: 12.2. `RuntimeError: Python in worker has different version 3.9 than that in driver 3.10, PySpark cannot...

Latest Reply
Surajv
New Contributor III
  • 0 kudos

Got it. As a side note, I tried the above methods, but the error persisted; upon reading the docs again, I found this statement: You must install Python 3 on your development machine, and the minor version of your client Python installation must be t...
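A quick local check illustrating that constraint (DBR 12.2 ships Python 3.9, matching the worker version in the error above):

```
# Fail fast if the local interpreter's minor version differs from the
# cluster's; databricks-connect requires them to match.
import sys

CLUSTER_PYTHON = (3, 9)  # assumed for a DBR 12.2 cluster
if sys.version_info[:2] != CLUSTER_PYTHON:
    raise RuntimeError(
        f"Local Python {sys.version_info[:2]} does not match cluster "
        f"Python {CLUSTER_PYTHON}; create a matching environment first."
    )
```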

1 More Reply
sharpbetty
by New Contributor II
  • 1256 Views
  • 3 replies
  • 0 kudos

Workflows: Running dependent task despite earlier task fail

I have a scheduled task running in a workflow. Task 1 computes some parameters, then these are picked up by a dependent reporting task: Task 2. I want Task 2 to report "Failure" if Task 1 fails. Yet creating a dependency in workflows means that Task 2 wil...

Labels: tasks, Workflows
Latest Reply
NerdSan
New Contributor
  • 0 kudos

Hi @sharpbetty, any suggestions how I can keep the parameter sharing and dependency from Task 1 to Task 2, yet also allow Task 2 to fire even on failure of Task 1? Setup: Task 2 dependent on Task 1. Challenge: to fire Task 2 even on Task 1 failure. Soluti...
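One way to express that setup (a sketch against the Jobs 2.1 API; task keys and notebook paths are placeholders) is to keep the depends_on edge but set run_if to ALL_DONE on the downstream task:

```
# Task 2 still depends on Task 1 (so task values flow through), but
# run_if="ALL_DONE" lets it fire whether Task 1 succeeds or fails.
job_tasks = [
    {"task_key": "task1",
     "notebook_task": {"notebook_path": "/Jobs/compute_params"}},
    {"task_key": "task2",
     "depends_on": [{"task_key": "task1"}],
     "run_if": "ALL_DONE",
     "notebook_task": {"notebook_path": "/Jobs/report"}},
]
```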

2 More Replies
faithlawrence98
by New Contributor II
  • 274 Views
  • 2 replies
  • 2 kudos

Why am I getting QB Desktop Error 6000 repeatedly?

Whenever I try to open my company file over a network or in multi-user mode, I keep getting QB Desktop Error 6000 and something after that. The error messages on my screen vary every time I attempt to access the data file. I cannot understand the error,...

Latest Reply
larsonkristen06
New Contributor
  • 2 kudos

Hi @faithlawrence98 and @judithphillips5, I appreciate you both taking the time to share your expertise. This is a well-written and insightful post. Keep up the great work! Thanks and regards, Larson Kristen

1 More Reply
mano7438
by New Contributor III
  • 20694 Views
  • 4 replies
  • 1 kudos

How to create a temporary table in Databricks

Hi team, I have a requirement where I need to create a temporary table, not a temporary view. Can you tell me how to create a temporary table in Databricks?

Latest Reply
NandiniN
Valued Contributor II
  • 1 kudos

I just learnt that the above is legacy support and hence must not be used. It isn't supported syntax, so there would be a lot of restrictions on its usage. Internally it is just a view, so we should go for CREATE TEMP VIEW instead. I k...
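For reference, a short sketch of the recommended alternative, a session-scoped temporary view created from PySpark:

```
# A temp view lives only for the current Spark session and stores no data.
df = spark.range(10)
df.createOrReplaceTempView("my_temp_view")

spark.sql("SELECT COUNT(*) AS n FROM my_temp_view").show()
```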

3 More Replies
DavidP32
by New Contributor
  • 103 Views
  • 2 replies
  • 0 kudos

DLT Pipeline problem - Unable to read dataset. The dataset is not defined within the pipeline.

Context: I've developed a DLT (Delta Live Tables) pipeline where I create several temporary tables. Initially, when I ran these tables individually in separate notebooks, they functioned correctly within the DLT framework. However, after merging the ...

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

Would you be able to share more details about your code, and the full error stack trace?
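While waiting on those details, an illustrative sketch (not the poster's code) of how datasets inside one DLT pipeline must reference each other; the source table is a placeholder:

```
# Every dataset read with dlt.read() must be defined in the same pipeline,
# otherwise DLT raises "dataset is not defined within the pipeline".
import dlt

@dlt.table(temporary=True)
def staged_orders():
    return spark.read.table("samples.tpch.orders")  # placeholder source

@dlt.table
def order_counts():
    return dlt.read("staged_orders").groupBy("o_orderstatus").count()
```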

1 More Reply
Prashanthkumar
by New Contributor III
  • 392 Views
  • 2 replies
  • 1 kudos

Databricks user access control via Azure AD?

Hi all, looking for suggestions on whether it is possible to control users via Azure AD (outside of Azure Databricks). I want to create new users in Azure and then give RBAC to individual users, rather than control their permissions f...

Latest Reply
Prashanthkumar
New Contributor III
  • 1 kudos

Thank you, Kaniz. Let me try some of the options, as my Databricks is integrated with AAD. Let me try Option 1, as that's my primary requirement.

1 More Reply
jvk
by New Contributor II
  • 192 Views
  • 1 reply
  • 0 kudos

"AWS S3 resource has been disabled" error on job, not appearing on notebook

I am getting an "INTERNAL_ERROR" on a Databricks job submitted through the API, which says: "Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled". However, when I click on the notebook created by the job...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @jvk, The “INTERNAL_ERROR” you’re encountering in your Databricks job, along with the message “Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled,” indicates that there’s an issue related to acces...

Prashanthkumar
by New Contributor III
  • 2472 Views
  • 6 replies
  • 0 kudos

Is it possible to view Databricks cluster metrics using the REST API?

I am looking for some help on getting Databricks cluster metrics such as memory utilization, CPU utilization, memory swap utilization, and free file system using the REST API. I am trying it in Postman using a Databricks token and with my Service Principal bear...

Latest Reply
Prashanthkumar
New Contributor III
  • 0 kudos

OK thank you, any plans to introduce a new feature in Databricks to capture CPU usage?
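Not a REST endpoint, but one hedged alternative in the meantime: if the system tables preview is enabled in the workspace, per-node utilization can be queried from system.compute.node_timeline (table and column names assumed here):

```
# Average CPU and memory utilization per cluster over the last day.
spark.sql("""
    SELECT cluster_id,
           AVG(cpu_user_percent + cpu_system_percent) AS avg_cpu_pct,
           AVG(mem_used_percent)                      AS avg_mem_pct
    FROM system.compute.node_timeline
    WHERE start_time >= current_date() - INTERVAL 1 DAY
    GROUP BY cluster_id
    ORDER BY avg_cpu_pct DESC
""").show()
```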

5 More Replies