Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Lizzz
by New Contributor II
  • 2851 Views
  • 2 replies
  • 3 kudos

Resolved! Forward Spark structured streaming metrics to Datadog

We have a Spark Structured Streaming application written in PySpark that we'd like to monitor with Datadog. By default, Datadog collects a couple of streaming metrics like 'spark.structured_streaming.processing_rate' and 'spark.structured_streaming.latency'. Ho...

Latest Reply
Kaniz_Fatma
Community Manager
  • 3 kudos

Hi @Liz Zhang​, we haven't heard from you since the last response from @Shanmugavel Chandrakasu​​, and I was checking back to see if his suggestions helped you. Otherwise, if you have found a solution, please share it with the community, as it can be helpful ...

  • 3 kudos
1 More Replies
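The question above asks how to forward streaming metrics beyond the ones Datadog collects by default. A common pattern is to hook the query's progress events and ship the numbers yourself. A minimal sketch of the extraction step, assuming plain dict progress events and the `spark.structured_streaming` metric prefix (the actual Datadog submission, e.g. via DogStatsD, is left out):

```python
# Sketch: flatten a Structured Streaming progress event into metric
# name/value pairs suitable for a custom-metrics backend. The field
# names match StreamingQueryProgress JSON; the prefix is illustrative.

def progress_to_metrics(progress, prefix="spark.structured_streaming"):
    """Extract numeric fields from a StreamingQueryProgress dict."""
    metrics = {
        f"{prefix}.input_rows": progress.get("numInputRows", 0),
        f"{prefix}.input_rate": progress.get("inputRowsPerSecond", 0.0),
        f"{prefix}.process_rate": progress.get("processedRowsPerSecond", 0.0),
    }
    # durationMs holds per-phase latencies, e.g. addBatch, triggerExecution
    for phase, ms in progress.get("durationMs", {}).items():
        metrics[f"{prefix}.duration_ms.{phase}"] = ms
    return metrics
```

In PySpark 3.4+ this could be called from a `StreamingQueryListener.onQueryProgress` callback; on older runtimes the same fields are available from `query.lastProgress`, polled periodically.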
SailajaB
by Valued Contributor III
  • 10740 Views
  • 10 replies
  • 10 kudos

Resolved! Is there a way to capture the notebook logs from ADF pipeline?

Hi, I would like to capture custom notebook log exceptions (Python) from an ADF pipeline; based on the exceptions, the pipeline should succeed or fail. Is there any mechanism to implement this? In my testing the ADF pipeline is successful irrespective of the log...

Latest Reply
GurpreetSethi
New Contributor III
  • 10 kudos

Hi SailajaB, try this out. A notebook, once executed successfully, returns a long JSON-formatted output. We need to specify the appropriate nodes to fetch the output. In the screenshot below we can see that when the notebook ran it returned empName & empCity as output....

  • 10 kudos
9 More Replies
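The reply above relies on the notebook handing a JSON payload back to ADF. A minimal sketch of that pattern, assuming a `status`/`error` payload shape (the key names and the workload are illustrative, not a fixed contract):

```python
# Sketch: serialize a notebook's outcome so an ADF pipeline can branch
# on success or failure. dbutils exists only inside Databricks, so the
# exit call is shown commented out.
import json

def build_exit_payload(status, error=None, **outputs):
    """Serialize notebook results (e.g. empName, empCity) for ADF."""
    payload = {"status": status, "error": error}
    payload.update(outputs)
    return json.dumps(payload)

# Inside the Databricks notebook:
# try:
#     result = run_etl()  # hypothetical workload
#     dbutils.notebook.exit(build_exit_payload("succeeded", empName=result))
# except Exception as e:
#     dbutils.notebook.exit(build_exit_payload("failed", error=str(e)))
```

In ADF, the Notebook activity exposes this string as `@activity('NotebookName').output.runOutput`, which an If Condition activity can parse to fail the pipeline or branch on the error.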
Saurav
by New Contributor III
  • 4548 Views
  • 6 replies
  • 7 kudos

spark cluster monitoring and visibility

Hey. I'm working on a project where I'd like to be able to view and play around with the Spark cluster metrics. I'd like to know what the utilization % and max values are for metrics like CPU, memory, and network. I've tried using some open source sol...

Latest Reply
Saurav
New Contributor III
  • 7 kudos

Hey @Kaniz Fatma​, I appreciate the suggestions and will be looking into them. I haven't gotten to it yet, so I didn't want to say whether they worked for me or not. Since I'm looking to avoid solutions like Datadog, I'll be checking out the Prometh...

  • 7 kudos
5 More Replies
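Since the reply above mentions Prometheus: Spark 3.x ships a native PrometheusServlet metrics sink that exposes driver metrics over HTTP, which a Prometheus server can then scrape. A sketch of the metrics.properties fragment, with the paths taken from Spark's monitoring docs (on Databricks this would typically be applied through an init script or cluster Spark conf):

```properties
# Expose Spark metrics in Prometheus format on the driver UI port
*.sink.prometheusServlet.class=org.apache.spark.metrics.sink.PrometheusServlet
*.sink.prometheusServlet.path=/metrics/prometheus
master.sink.prometheusServlet.path=/metrics/master/prometheus
applications.sink.prometheusServlet.path=/metrics/applications/prometheus
```

Setting `spark.ui.prometheus.enabled=true` additionally exposes per-executor metrics at `/metrics/executors/prometheus`.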
User15787040559
by New Contributor III
  • 2448 Views
  • 2 replies
  • 0 kudos

MicrosoftTeams-image

ERROR Max retries exceeded with url: /api/2.0/jobs/runs/get?run_id= Failed to establish a new connection. This error can happen when exceeding the rate limits for all REST API calls, as documented here. In the image shown, for example, we're using the Jobs...

Latest Reply
User16764241763
Honored Contributor
  • 0 kudos

Hi @Carlos Morillo​, are you facing this issue consistently, or only when you run a lot of jobs? We are internally tracking a similar issue. Could you please file a support request with Microsoft Support? Databricks and MSFT will collaborate and provide upd...

  • 0 kudos
1 More Replies
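Rate-limit errors like the one above are usually handled client-side with exponential backoff rather than immediate retries. A minimal sketch, with `call` standing in for any Jobs API request (the HTTP 429 check and delay values are illustrative, not Databricks-specific):

```python
# Sketch: retry a rate-limited REST call with exponential backoff.
# `call` returns (status_code, body); `sleep` is injectable for testing.
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff while it is rate-limited."""
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:                    # not rate-limited: done
            return status, body
        sleep(base_delay * (2 ** attempt))   # wait 1s, 2s, 4s, ...
    return status, body                      # give up, surface last result

# e.g. call_with_backoff(lambda: do_jobs_runs_get(run_id))  # hypothetical helper
```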
User15787040559
by New Contributor III
  • 975 Views
  • 1 reply
  • 1 kudos

How do you find out if the REST API calls are logged anywhere when you update an IP Access List?

In the example response at https://docs.databricks.com/security/network/ip-access-list.html{ "ip_access_list": { "list_id": "<list-id>", "label": "office", "ip_addresses": [ "1.1.1.1", "2.2.2.2/21" ], "address_co...

Latest Reply
User16752239289
Valued Contributor
  • 1 kudos

The workspace audit logs should provide all workspace configuration change events. Check for the service name accountsManager and the action names createWorkspaceConfiguration or updateWorkspaceConfiguration.

  • 1 kudos
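To make the reply above concrete: a sketch of filtering audit-log records for those workspace-configuration events, assuming records shaped like the audit-log schema's serviceName/actionName fields (everything else here is illustrative). On Databricks you would typically express the same filter in SQL over the delivered audit-log table.

```python
# Sketch: keep only audit-log events matching the service/action names
# cited in the reply (accountsManager + *WorkspaceConfiguration), which
# cover IP-access-list changes.

def workspace_conf_changes(records):
    """Filter audit-log dicts down to workspace configuration changes."""
    wanted = {"createWorkspaceConfiguration", "updateWorkspaceConfiguration"}
    return [
        r for r in records
        if r.get("serviceName") == "accountsManager"
        and r.get("actionName") in wanted
    ]
```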
craig_ng
by New Contributor III
  • 2491 Views
  • 2 replies
  • 0 kudos
Latest Reply
Anonymous
Not applicable
  • 0 kudos

You can monitor user access to data and other resources using Databricks Audit Logs.
  • Diagnostic logging in Azure Databricks
  • Configure audit logging in AWS Databricks

  • 0 kudos
1 More Replies