Forum Posts

kushalnagrani
by New Contributor III
  • 454 Views
  • 3 replies
  • 0 kudos

Resolved! Getting an error while running the MLOPS demo

Hello, I am trying to run this demo in Databricks Community Edition but I am facing an error. MLOPS DEMO - https://www.databricks.com/resources/demos/tutorials/data-science-and-ai/mlops-end-to-end-pipeline?itm_data=demo_center Someone else also faced the s...

Get Started Discussions
dbdemos
mlflow
MLOPS
Latest Reply
sidosq
Visitor
  • 0 kudos

Hey @kushalnagrani, thank you for sharing this. I am facing the same issue. Could you please tell me where I can find these settings? I am new, just created a community workspace, and I thought that, as the admin, I would have all these by default - ...

2 More Replies
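
For anyone hitting the same setup wall, here is a minimal sketch of how a dbdemos-based demo is typically installed from a notebook; the demo name "mlops-end2end" is an assumption based on the linked demo-center page:

```
# Hedged sketch: install the MLOps demo via the dbdemos package from a
# Databricks notebook. The demo name "mlops-end2end" is an assumption from
# the linked demo page; dbdemos.list_demos() prints the available names.
%pip install dbdemos

import dbdemos
dbdemos.install("mlops-end2end")
```

If the install itself succeeds but the demo still fails, the missing pieces may be workspace features that Community Edition does not expose, though that is an assumption since the screenshots are not reproduced here.
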
WilliamMartine5
by New Contributor
  • 151 Views
  • 1 reply
  • 0 kudos

What are the various methods for extracting log data from Splunk into Databricks?

Hello, I've recently embarked on integrating Splunk with Databricks. My aim is to efficiently ingest data from Splunk into Databricks. While I've reviewed the available documentation on Splunk Integration, it primarily covers basic information. Howeve...

Latest Reply
mhiltner
New Contributor II
  • 0 kudos

I'd highly recommend checking out Fivetran. Easy integration with Databricks, cost-effective, and they have recently launched a Splunk integration. https://fivetran.com/docs/connectors/applications/splunk You can set it up on the Data Ingestion sectio...

mohaimen_syed
by New Contributor III
  • 123 Views
  • 2 replies
  • 0 kudos

VS Code integration with Python Notebook and Remote Cluster

Hi, I'm trying to work in VS Code remotely on my machine instead of using the Databricks environment in my browser. I have gone through the documentation to set up the Databricks extension and also set up Databricks Connect, but don't feel like they work ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @mohaimen_syed, It sounds like you’re trying to use Databricks Connect to run a Python notebook on a remote Azure Databricks cluster from your local machine. Let’s break down the steps to achieve this: Configure Azure Databricks Authentication...

1 More Reply
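
For reference, a minimal sketch of the first sanity check for this kind of setup: confirming a Databricks Connect (v2, DBR 13+) session from VS Code. The "DEFAULT" profile in ~/.databrickscfg is an assumption about the local configuration:

```
# Hedged sketch: verify Databricks Connect reaches the remote cluster.
# Assumes databricks-connect 13.x+ is installed and ~/.databrickscfg has a
# "DEFAULT" profile with host, token, and cluster_id (an assumption).
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.profile("DEFAULT").getOrCreate()
print(spark.range(5).collect())  # executes on the remote cluster
```
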
rahuja
by New Contributor
  • 361 Views
  • 2 replies
  • 0 kudos

Py4JError: An error occurred while calling o992.resourceProfileManager

Hello, I am trying to run the SparkXGBoostRegressor and I am getting the following error: Py4JError: An error occurred while calling o992.resourceProfileManager. Trace: py4j.security.Py4JSecurityException: Method public org.apache.spark.resource...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @rahuja, The error you’re encountering might be related to the interaction between PySpark and XGBoost. Let’s explore some potential solutions: PySpark Version Compatibility: Ensure that your PySpark version is compatible with the XGBoost vers...

1 More Reply
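
For context on what the post attempts, a sketch of the xgboost.spark estimator is below. The py4j.security exception often points to a shared-access-mode cluster blocking JVM calls, so this assumes a single-user cluster; table and column names are placeholders:

```
# Hedged sketch of xgboost.spark's distributed regressor (xgboost >= 1.7).
# Assumes a single-user access mode cluster, since shared clusters can block
# JVM methods via py4j security. Table/column names are placeholders.
from xgboost.spark import SparkXGBRegressor
from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(spark.table("train_data"))

regressor = SparkXGBRegressor(features_col="features", label_col="label")
model = regressor.fit(train)
```
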
Surajv
by New Contributor III
  • 84 Views
  • 1 reply
  • 0 kudos

Getting databricks-connect com.fasterxml.jackson.databind.exc.MismatchedInputException parse warning

Hi community, I am getting the below warning when I try using PySpark code for some of my use cases with databricks-connect. Is this a critical warning, and any idea what it means? Logs: WARN DatabricksConnectConf: Could not parse /root/.databricks-c...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Surajv, the warning you’re encountering is related to using Databricks Connect with PySpark. Databricks Connect: Databricks Connect is a Python library that allows you to connect your local development environment to a Databricks cluster. I...

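
As a quick diagnostic for this warning, a sketch that re-runs the parse the log line complains about; the path comes from the WARN message itself, and JSON is assumed as the legacy config file's format:

```
# Hedged sketch: check whether the legacy databricks-connect config file is
# valid JSON -- the same parse the WARN line reports failing. The path is
# taken from the log message; adjust for your environment.
import json
import pathlib

cfg_path = pathlib.Path.home() / ".databricks-connect"
try:
    cfg = json.loads(cfg_path.read_text())
    print("parsed OK; keys:", sorted(cfg))
except (FileNotFoundError, json.JSONDecodeError) as exc:
    print(f"could not parse {cfg_path}: {exc}")
```
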
rafal_walisko
by New Contributor
  • 98 Views
  • 1 reply
  • 0 kudos

Optimal Strategies for downloading large query results with Databricks API

Hi everyone, I'm currently facing an issue with handling a large amount of data using the Databricks API. Specifically, I have a query that returns a significant volume of data, sometimes resulting in over 200 chunks. My initial approach was to retriev...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @rafal_walisko, Handling large volumes of data using the Databricks API can indeed be challenging, especially when dealing with numerous chunks. Let’s explore some strategies that might help you optimize your approach: Rate Limits and Paral...

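
One common pattern here is to walk the SQL Statement Execution API's chunk endpoint until next_chunk_index runs out; a hedged sketch where host, token, statement ID, and the handler are all placeholders:

```
# Hedged sketch: page through the result chunks of a completed statement via
# the SQL Statement Execution API. HOST, TOKEN, and STATEMENT_ID are
# placeholders; process_chunk is a hypothetical handler.
import requests

HOST = "https://<workspace-host>"
TOKEN = "<personal-access-token>"
STATEMENT_ID = "<statement-id>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def process_chunk(chunk: dict) -> None:
    # Hypothetical handler: inspect or persist one chunk of rows.
    print(chunk.get("chunk_index"), len(chunk.get("data_array", [])))

chunk_index = 0
while chunk_index is not None:
    resp = requests.get(
        f"{HOST}/api/2.0/sql/statements/{STATEMENT_ID}/result/chunks/{chunk_index}",
        headers=HEADERS,
        timeout=60,
    )
    resp.raise_for_status()
    chunk = resp.json()
    process_chunk(chunk)
    chunk_index = chunk.get("next_chunk_index")  # absent when no chunks remain
```
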
ymt
by New Contributor
  • 281 Views
  • 1 reply
  • 1 kudos

Connection from Databricks to Snowflake using OKTA

Hi team, this is how I connect to Snowflake from Jupyter Notebook:
import snowflake.connector
snowflake_connection = snowflake.connector.connect(
    authenticator='externalbrowser',
    user='U1',
    account='company1.us-east-1',
    database='db1',...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @ymt, It seems you’ve encountered an issue while connecting to Snowflake from your Databricks Notebook. The error message you received is: ImportError: cannot import name 'NamedTuple' from 'typing_extensions' (/databricks/python/lib/python3.9/s...

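
For readers landing here, a hedged sketch of the external-browser (OKTA) pattern quoted in the post, with the thread's placeholder values. Note that 'externalbrowser' needs a local browser to complete SSO, so it generally cannot run on a remote Databricks driver:

```
# Hedged sketch of the OKTA/external-browser connection from the post;
# user/account/database are the thread's placeholders. If the reply's
# ImportError on typing_extensions appears, upgrading that package may
# help -- an assumption, not a confirmed fix from this thread.
import snowflake.connector

conn = snowflake.connector.connect(
    authenticator="externalbrowser",  # opens a browser window for OKTA SSO
    user="U1",
    account="company1.us-east-1",
    database="db1",
)
```
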
mh_db
by New Contributor II
  • 385 Views
  • 5 replies
  • 1 kudos

Job parameters to get date and time

I'm trying to set up a workflow in Databricks and I need my job parameter to get the date and time. I see in the documentation there are some options for dynamic values. I'm trying to use this one: {{job.start_time.[argument]}}. For the "argument" there, ...

Latest Reply
brockb
New Contributor III
  • 1 kudos

Then please change the code to:
```
iso_datetime = dbutils.widgets.get("LoadID")
```

4 More Replies
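
Putting the thread's pieces together, a hedged sketch of the pattern: the job configuration sets a task parameter (named "LoadID" in the thread) to the dynamic value reference {{job.start_time.iso_datetime}} (assumed to be one of the documented job.start_time arguments), and the notebook reads it as a widget:

```
# Hedged sketch: the task parameter "LoadID" (name from the thread) is set to
# {{job.start_time.iso_datetime}} in the job config (assumed to be a valid
# job.start_time argument) and read here inside the notebook.
iso_datetime = dbutils.widgets.get("LoadID")
print(iso_datetime)
```
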
Surajv
by New Contributor III
  • 381 Views
  • 2 replies
  • 0 kudos

Getting Python version errors when using PySpark RDDs with databricks-connect

Hi community, when I use PySpark RDD-related functions in my environment using databricks-connect, I get the below error: Databricks cluster version: 12.2. `RuntimeError: Python in worker has different version 3.9 than that in driver 3.10, PySpark cannot...

Latest Reply
Surajv
New Contributor III
  • 0 kudos

Got it. As a side note, I tried the above methods, but the error persisted; upon reading the docs again, there was this statement: You must install Python 3 on your development machine, and the minor version of your client Python installation must be t...

1 More Reply
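
The quoted requirement lends itself to a quick local check; a sketch comparing the client interpreter against the cluster's Python, where "3.9" follows the worker version named in the thread's error message:

```
# Hedged sketch: databricks-connect needs the client's Python minor version to
# match the cluster's. The thread's DBR 12.2 cluster runs Python 3.9 (per the
# error message), so a local 3.10 interpreter triggers the RuntimeError.
import sys

local = f"{sys.version_info.major}.{sys.version_info.minor}"
expected = "3.9"  # assumption: cluster is the thread's DBR 12.2
print(f"local Python {local}; cluster Python {expected}; match: {local == expected}")
```
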
sharpbetty
by New Contributor II
  • 1279 Views
  • 3 replies
  • 0 kudos

Workflows: Running dependent task despite earlier task fail

I have a scheduled task running in a workflow. Task 1 computes some parameters, then these are picked up by a dependent reporting task: Task 2. I want Task 2 to report "Failure" if Task 1 fails. Yet creating a dependency in workflows means that Task 2 wil...

Get Started Discussions
tasks
Workflows
Latest Reply
NerdSan
New Contributor
  • 0 kudos

Hi @sharpbetty, any suggestions how I can keep the parameter sharing and dependency from Task 1 to Task 2, yet also allow Task 2 to fire even on failure of Task 1? Setup: Task 2 dependent on Task 1. Challenge: to fire Task 2 even on Task 1 failure. Soluti...

2 More Replies
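
For reference, the Jobs API expresses this through the task-level run_if setting; a hedged sketch using the databricks-sdk Python package, with placeholder task keys and notebook path:

```
# Hedged sketch: Task 2 keeps its dependency on Task 1 (so parameter sharing
# still works) but sets run_if=ALL_DONE, so it fires even when Task 1 fails.
# Task keys and the notebook path are placeholders; the task would then be
# passed to WorkspaceClient().jobs.create(..., tasks=[task1, task2]).
from databricks.sdk.service import jobs

task2 = jobs.Task(
    task_key="task2",
    depends_on=[jobs.TaskDependency(task_key="task1")],
    run_if=jobs.RunIf.ALL_DONE,  # run regardless of Task 1's outcome
    notebook_task=jobs.NotebookTask(notebook_path="/Workspace/reporting"),
)
```
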
faithlawrence98
by New Contributor II
  • 285 Views
  • 2 replies
  • 2 kudos

Why am I getting QB Desktop Error 6000 repeatedly?

Whenever I try to open my company file over a network or in multi-user mode, I keep getting QB Desktop Error 6000 and something after that. The error messages on my screen vary every time I attempt to access the data file. I cannot understand the error,...

Latest Reply
larsonkristen06
New Contributor
  • 2 kudos

Hi @faithlawrence98 and @judithphillips5, I appreciate you both taking the time to share your expertise. This is a well-written and insightful post. Keep up the great work! Thanks and regards, Larson Kristen

1 More Reply
mano7438
by New Contributor III
  • 20919 Views
  • 4 replies
  • 1 kudos

How to create a temporary table in Databricks

Hi Team, I have a requirement where I need to create a temporary table, not a temporary view. Can you tell me how to create a temporary table in Databricks?

Latest Reply
NandiniN
Valued Contributor II
  • 1 kudos

I just learnt that the above is legacy support and hence must not be used. It isn't supported syntax, so there would be a lot of restrictions on its usage. Internally it is just a view, so we should go for CREATE TEMP VIEW instead. I k...

3 More Replies
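
In practice the recommendation above reads like this; a minimal sketch where "source_table" is a placeholder:

```
# Hedged sketch of the recommended alternative: a temp view, scoped to the
# Spark session rather than persisted. "source_table" is a placeholder.
spark.sql("""
    CREATE OR REPLACE TEMP VIEW my_temp AS
    SELECT * FROM source_table
""")
spark.table("my_temp").show()
```
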
Prashanthkumar
by New Contributor III
  • 412 Views
  • 2 replies
  • 1 kudos

Databricks User Access Control via Azure AD?

Hi All, looking for suggestions to see if it is possible to control users via Azure AD (outside of Azure Databricks). I want to create new users in Azure, give RBAC to individual users, and rather than control their permissions f...

Latest Reply
Prashanthkumar
New Contributor III
  • 1 kudos

Thank you Kaniz, let me try some of the options, as my Databricks is integrated with AAD. Let me try Option 1 as that's my primary requirement.

1 More Reply
jvk
by New Contributor II
  • 217 Views
  • 1 reply
  • 0 kudos

"AWS S3 resource has been disabled" error on job, not appearing on notebook

I am getting an "INTERNAL_ERROR" on a Databricks job submitted through the API, which says: "Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled". However, when I click on the notebook created by the job...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @jvk, The “INTERNAL_ERROR” you’re encountering in your Databricks job, along with the message “Run result unavailable: run failed with error message All access to AWS S3 resource has been disabled,” indicates that there’s an issue related to acces...

Prashanthkumar
by New Contributor III
  • 2517 Views
  • 6 replies
  • 0 kudos

Is it possible to view Databricks cluster metrics using REST API

I am looking for some help on getting Databricks cluster metrics such as memory utilization, CPU utilization, memory swap utilization, and free file system space using the REST API. I am trying it in Postman using a Databricks token and with my Service Principal bear...

Latest Reply
Prashanthkumar
New Contributor III
  • 0 kudos

OK, thank you. Any plans to introduce a new feature in Databricks to capture CPU usage?

5 More Replies