Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Forum Posts

User16826994223
by Honored Contributor III
  • 838 Views
  • 1 reply
  • 1 kudos

Does Databricks have a data processing agreement?
Latest Reply
User16826994223
Honored Contributor III
  • 1 kudos

Databricks offers a standalone data processing agreement, for compliance with certain data protection laws, that contains our contractual commitments with respect to applicable data protection and privacy law. If your company determines that you require ter...

User16826994223
by Honored Contributor III
  • 3472 Views
  • 1 reply
  • 0 kudos

Do login sessions into Databricks have an idle timeout?
Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

Short answer: yes. Detailed answer: user sessions automatically time out after six hours of idle time. This timeout is not configurable. User sessions are also terminated if the user is removed from the workspace. To trigger session end for users who were remo...

Anonymous
by Not applicable
  • 895 Views
  • 1 reply
  • 0 kudos
Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

For any other non-private previews, check the admin console --> Advanced tab; there are many toggles there to enable or disable features. If a feature isn't listed there, there usually isn't an easy (or direct) way of disabling it.

Anonymous
by Not applicable
  • 794 Views
  • 1 reply
  • 2 kudos
Latest Reply
User16826994223
Honored Contributor III
  • 2 kudos

Scala uses the JVM to run its code, and it cannot run different applications at the same time with complete isolation of each task inside a single JVM. That is why Scala doesn't support high-concurrency clusters, and I don't think it is on the roadmap.

MoJaMa
by Valued Contributor II
  • 874 Views
  • 1 reply
  • 0 kudos
Latest Reply
MoJaMa
Valued Contributor II
  • 0 kudos

That's only available in the Premium and Enterprise SKUs on AWS. See the "Enterprise Security" section here: https://databricks.com/product/aws-pricing

User16783853501
by New Contributor II
  • 1004 Views
  • 1 reply
  • 0 kudos

What types of files does Auto Loader support for streaming ingestion? I see good support for CSV and JSON; how can I ingest files like XML, Avro, Parquet, etc.? Would XML rely on Spark-XML?
Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

Please raise a feature request via the ideas portal for XML support in Auto Loader. As a workaround, you could look at reading these files with wholeTextFiles (which loads the data into a PairRDD with one record per input file) and parsing them with from_xml from ...
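A minimal sketch of that wholeTextFiles workaround, assuming a running Spark session and a simple one-record-per-file XML layout like `<record><id>1</id><name>a</name></record>`. The per-file parsing step here uses Python's stdlib xml.etree instead of spark-xml's from_xml, purely for illustration; the schema and column names are placeholders.

```python
import xml.etree.ElementTree as ET

def parse_record(xml_text):
    """Parse one whole-file XML payload into an (id, name) tuple.
    Assumes the illustrative <record><id>..</id><name>..</name></record> layout."""
    root = ET.fromstring(xml_text)
    return (root.findtext("id"), root.findtext("name"))

def ingest_xml(spark, input_path):
    """Read a folder of XML files and return them as a DataFrame."""
    # wholeTextFiles yields (path, file_contents) pairs, one per input file
    pairs = spark.sparkContext.wholeTextFiles(input_path)
    parsed = pairs.map(lambda kv: parse_record(kv[1]))
    return spark.createDataFrame(parsed, ["id", "name"])
```

Note this reads each file fully into memory on one executor, so it suits many small files rather than a few huge ones.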

User16790091296
by Contributor II
  • 1380 Views
  • 1 reply
  • 1 kudos

Using Databricks Connect (DBConnect)

I'd like to edit Databricks notebooks locally using my favorite editor, and then use Databricks Connect to run the notebook remotely on a Databricks cluster that I usually access via the web interface. I run "databricks-connect configure", as suggest...
Latest Reply
sajith_appukutt
Honored Contributor II
  • 1 kudos

Here is the link to the configuration properties: https://docs.databricks.com/dev-tools/databricks-connect.html#step-2-configure-connection-properties
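For reference, "databricks-connect configure" prompts for values like the following and stores them in a local config file; the placeholder values below are illustrative only, and the exact field set is described in the linked docs:

```json
{
  "host": "https://<your-workspace>.cloud.databricks.com",
  "token": "<personal-access-token>",
  "cluster_id": "<cluster-id>",
  "org_id": "0",
  "port": 15001
}
```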

User16790091296
by Contributor II
  • 7872 Views
  • 1 reply
  • 0 kudos

Azure Databricks: How to add Spark configuration in Databricks cluster?

I am using a Spark Databricks cluster and want to add a customized Spark configuration. There is Databricks documentation on this, but I am not getting any clue how and what changes I should make. Can someone please share the example to configure the Da...
Latest Reply
brickster_2018
Esteemed Contributor
  • 0 kudos

You can set the configurations in the Databricks cluster UI: https://docs.databricks.com/clusters/configure.html#spark-configuration
To see the default configuration, run the code below in a notebook:
%sql set;
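As an illustration, the Spark config box on the cluster edit page takes one space-separated key-value pair per line; the keys and values below are example settings only, not recommendations:

```
spark.sql.shuffle.partitions 64
spark.serializer org.apache.spark.serializer.KryoSerializer
```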

User16790091296
by Contributor II
  • 7774 Views
  • 1 reply
  • 0 kudos

How to list Notebooks in a Workspace - Databricks?

I want to list the notebooks in a folder in Databricks. I tried utilities like dbutils.fs.ls("/path") -> it shows the path of the storage folder. I also tried dbutils.notebook.help() - nothing useful. Let's say there is a fol...
Latest Reply
brickster_2018
Esteemed Contributor
  • 0 kudos

Notebooks are not stored in DBFS, so they cannot be listed directly from the file system. You should use the Databricks REST API to list them and get the details: https://docs.databricks.com/dev-tools/api/latest/workspace.html#list
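A stdlib-only sketch of calling that endpoint, assuming a personal access token; the host, token, and path values are placeholders, and the "objects"/"object_type"/"path" response fields follow the linked Workspace API docs:

```python
import json
import urllib.parse
import urllib.request

def build_list_request(host, token, path):
    """Build the GET request for /api/2.0/workspace/list."""
    query = urllib.parse.urlencode({"path": path})
    url = f"{host}/api/2.0/workspace/list?{query}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

def list_notebooks(host, token, path):
    """Return the paths of notebooks directly under the given folder."""
    req = build_list_request(host, token, path)
    with urllib.request.urlopen(req) as resp:
        objects = json.load(resp).get("objects", [])
    return [o["path"] for o in objects if o.get("object_type") == "NOTEBOOK"]
```

To recurse into subfolders you would call list_notebooks again for each entry whose object_type is DIRECTORY.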

User16826992666
by Valued Contributor
  • 1428 Views
  • 1 reply
  • 0 kudos
Latest Reply
brickster_2018
Esteemed Contributor
  • 0 kudos

To time travel to a particular version, it's necessary to have the JSON file for that particular version. The JSON files in the _delta_log directory have a default retention of 30 days, so by default we can time travel only up to 30 days back. The retention of the D...
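For context, the time travel syntax itself looks like this in a notebook (the table name, version, and timestamp are placeholders), and the commit-file retention the reply refers to can be adjusted via the delta.logRetentionDuration table property:

```sql
-- Query an older snapshot by version or by timestamp
SELECT * FROM my_table VERSION AS OF 5;
SELECT * FROM my_table TIMESTAMP AS OF '2024-01-01';

-- Keep commit JSON files longer than the ~30-day default
ALTER TABLE my_table
  SET TBLPROPERTIES ('delta.logRetentionDuration' = 'interval 60 days');
```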

