Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

mbdata
by New Contributor II
  • 37836 Views
  • 6 replies
  • 6 kudos

Resolved! Toggle line comment

I work with Azure Databricks. The shortcut Ctrl + / to toggle a line comment doesn't work on an AZERTY keyboard in Firefox... Do you know about this issue? Is there another shortcut I can try? Thanks!

Latest Reply
Flo
New Contributor III
  • 6 kudos

'Cmd + Shift + 7' works for me! I'm using an AZERTY keyboard in Chrome on macOS.

5 More Replies
JordanYaker
by Contributor
  • 1404 Views
  • 0 replies
  • 0 kudos

Integration options for Databricks Jobs and DataDog?

I know that there is already the Databricks (technically Spark) integration for DataDog. Unfortunately, that integration only covers the cluster execution itself, which means only Cluster Metrics and Spark Jobs and Tasks. I'm looking for somethin...

Direo
by Contributor
  • 2113 Views
  • 1 reply
  • 1 kudos

Azure Databricks integration with Datadog

Before running a script that creates an agent on a cluster, you have to provide the SPARK_LOCAL_IP variable. How can I find it? Does it change over time, or is it a constant?

  • 2113 Views
  • 1 replies
  • 1 kudos
Latest Reply
Debayan
Databricks Employee
  • 1 kudos

Hi, could you please refer to https://www.datadoghq.com/blog/databricks-monitoring-datadog/ and let us know if this helps. FYI, SPARK_LOCAL_IP is an environment variable; see https://spark.apache.org/docs/latest/configuration.html.
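
As a quick way to inspect it on a running cluster (a sketch only, assuming a Databricks notebook where spark is predefined; note that SPARK_LOCAL_IP is set per node, so it is not a cluster-wide constant and can change across restarts):

import os

# SPARK_LOCAL_IP is a per-node environment variable, set when the node starts;
# on the driver it can usually be read directly from a notebook.
print(os.environ.get("SPARK_LOCAL_IP"))

# The driver's address is also exposed as a Spark conf.
print(spark.conf.get("spark.driver.host"))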

julie
by New Contributor III
  • 3989 Views
  • 5 replies
  • 3 kudos

Resolved! Scope creation in Databricks or Confluent?

Hello, I am a newbie in this field and am trying to access a Confluent Kafka stream in Azure Databricks, based on a beginner's video by Databricks. I have a free-trial Databricks cluster right now. When I run the notebook below, it errors out on line 5 o...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 3 kudos

For testing, create it without a secret scope. It will be unsafe, but you can paste secrets as strings in the notebook for testing. Here is the code I used for loading data from Confluent: inputDF = (spark .readStream .format("kafka") .option("kafka.b...
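
A fuller version of that truncated snippet, as a sketch only: the bootstrap server, topic, and credentials below are placeholders, and outside of testing they should come from a secret scope rather than notebook strings.

# Placeholders for illustration; hardcoding secrets is for testing only.
confluent_api_key = "<API_KEY>"        # in real use: dbutils.secrets.get(...)
confluent_api_secret = "<API_SECRET>"

inputDF = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "<cluster>.confluent.cloud:9092")
    .option("subscribe", "<topic-name>")
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option(
        "kafka.sasl.jaas.config",
        "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule "
        f'required username="{confluent_api_key}" password="{confluent_api_secret}";',
    )
    .option("startingOffsets", "earliest")
    .load()
)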

4 More Replies
Lizzz
by New Contributor II
  • 3391 Views
  • 1 reply
  • 3 kudos

Resolved! Forward Spark structured streaming metrics to Datadog

We have a Spark Structured Streaming application written in PySpark that we'd like to monitor with Datadog. By default, Datadog collects a couple of streaming metrics like 'spark.structured_streaming.processing_rate' and 'spark.structured_streaming.latency'. Ho...

Latest Reply
shan_chandra
Databricks Employee
  • 3 kudos

@Liz Zhang, please refer to the documentation below, which contains a PySpark implementation of StreamingQueryListener: https://www.databricks.com/blog/2022/05/27/how-to-monitor-streaming-queries-in-pyspark.html
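
A minimal sketch of such a listener, assuming a runtime where the Python StreamingQueryListener is available (Spark 3.4+ / DBR 11+); the class name is made up here, and the prints stand in for whatever Datadog/statsd client call you use to forward the values.

from pyspark.sql.streaming import StreamingQueryListener

class DatadogForwardingListener(StreamingQueryListener):
    """Per-batch metric hook; forwarding to Datadog is sketched as prints."""

    def onQueryStarted(self, event):
        print(f"stream started: {event.id}")

    def onQueryProgress(self, event):
        p = event.progress
        # Pick out metrics beyond the defaults Datadog already collects,
        # then replace the print with a Datadog/statsd client call.
        print(p.name, p.numInputRows, p.processedRowsPerSecond)

    def onQueryTerminated(self, event):
        print(f"stream terminated: {event.id}")

spark.streams.addListener(DatadogForwardingListener())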

User16826994223
by Honored Contributor III
  • 4226 Views
  • 1 reply
  • 0 kudos

How to export the full result in Azure Databricks

What is the best way to see all the data? I see that display shows up to 100,000 rows only. Is there any way I can see all the data, or do I need to download or export it to a different file?

Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

Yes, Databricks displays only a limited part of the DataFrame. It allows you to download the data as a CSV. You can save the DataFrame as a table in the Databricks database with this: predictions.select("salry", "dept").write.saveAsTable("depsalry") Then you ca...
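
A sketch of that suggestion end to end, assuming a DataFrame named predictions as in the reply (the "salry"/"depsalry" names come from the reply above; the export path is hypothetical):

# Save the full result as a table, then write it all out as CSV so that
# nothing is cut off by display()'s row limit.
predictions.select("salry", "dept").write.saveAsTable("depsalry")

(spark.table("depsalry")
    .write
    .option("header", True)
    .mode("overwrite")
    .csv("dbfs:/tmp/depsalry_export"))  # hypothetical DBFS export path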
