Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

User16783853906
by Contributor III
  • 884 Views
  • 0 replies
  • 0 kudos

Verify auto-optimize from Delta history

How can I verify from the Delta history whether auto-optimize is enabled, for the two scenarios below? Will DESC HISTORY show the details in both cases?
1) Auto-optimize set in the table properties
2) Auto-optimize enabled in the Spark session
P.S. - I'm...
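A minimal sketch of how one might check both scopes, assuming a Delta table named events (the property and conf names below are the standard Databricks ones; DESCRIBE HISTORY records resulting operations, but a session-level setting is not itself listed there):

    # 1) Table-level: look for delta.autoOptimize.optimizeWrite / delta.autoOptimize.autoCompact
    spark.sql("SHOW TBLPROPERTIES events").show(truncate=False)

    # 2) Session-level: read the confs directly
    print(spark.conf.get("spark.databricks.delta.optimizeWrite.enabled", "not set"))
    print(spark.conf.get("spark.databricks.delta.autoCompact.enabled", "not set"))

    # The history shows the operations themselves (e.g. OPTIMIZE entries from auto compaction)
    spark.sql("DESCRIBE HISTORY events") \
        .select("operation", "operationParameters").show(truncate=False)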

User16753724663
by Valued Contributor
  • 1341 Views
  • 1 reply
  • 0 kudos

Resolved! Unable to create a token while deploying the workspace using Terraform

We have automated our deployment with Python APIs; however, we have hit a situation we cannot yet solve. We want to collect a token during the first deployment within the environment, and our API currently requires a token. Is there...

Latest Reply
User16753724663
Valued Contributor
  • 0 kudos

We can use the API below to create a token with the username and password:

    curl -X POST -u "admin_email:xxxx" https://host/api/2.0/token/create \
      -d '{ "lifetime_seconds": 100, "comment": "this is an example token" }'

User16826992666
by Valued Contributor
  • 8391 Views
  • 1 reply
  • 1 kudos

Resolved! Can you import a Jupyter notebook to a Databricks workspace?

Also curious if you can export a notebook created in Databricks as a Jupyter notebook

Latest Reply
User16826992666
Valued Contributor
  • 1 kudos

Yes, the .ipynb format is a supported file type which can be imported to a Databricks workspace. Note that some special configurations may need to be adjusted to work in the Databricks environment. Additional accepted file formats which can be import...
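As a hedged illustration, here is a minimal Python sketch against the Workspace API import endpoint, which accepts format=JUPYTER (the host, token, and paths are placeholders):

    import base64
    import requests

    host = "https://<databricks-instance>"
    token = "<personal-access-token>"

    # Base64-encode the notebook and import it into the workspace
    with open("analysis.ipynb", "rb") as f:
        content = base64.b64encode(f.read()).decode()

    resp = requests.post(
        f"{host}/api/2.0/workspace/import",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "path": "/Users/me@example.com/analysis",
            "format": "JUPYTER",
            "content": content,
        },
    )
    resp.raise_for_status()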

User16826992666
by Valued Contributor
  • 1700 Views
  • 1 reply
  • 0 kudos

Resolved! What should I be looking for when evaluating the performance of a Spark job?

Where do I start when starting performance tuning of my queries? Are there particular things I should be looking out for?

Latest Reply
Srikanth_Gupta_
Valued Contributor
  • 0 kudos

A few things off the top of my mind:
1) Check the Spark UI and see which stage is taking the most time.
2) Check for data skew.
3) Data skew can severely degrade query performance; Spark SQL accepts skew hints in queries. Also make sure to use proper join h...
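For instance, one of the hints mentioned above is a broadcast join hint; a minimal sketch, assuming a large fact DataFrame and a small dimension DataFrame joined on an id column:

    from pyspark.sql.functions import broadcast

    # Ask Spark to broadcast the small table instead of shuffling both sides
    result = large_df.join(broadcast(small_df), "id")
    result.explain()  # the plan should show a BroadcastHashJoin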

User16826992666
by Valued Contributor
  • 651 Views
  • 1 reply
  • 0 kudos

Does Databricks SQL support any kind of custom visuals?

Wondering if I can make any kind of custom visuals, or are the built-in ones the only options?

Latest Reply
User16826992666
Valued Contributor
  • 0 kudos

At this time the only available visuals are the ones that are included in the Databricks SQL environment. There is no way to import or create custom visuals.

User16826992666
by Valued Contributor
  • 739 Views
  • 1 reply
  • 0 kudos

Do I have to use Delta format when writing data in Databricks?
Latest Reply
User16826992666
Valued Contributor
  • 0 kudos

No, you do not. Although Delta is the default format when writing data with Databricks, any file format supported by Spark can be used when writing data.
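For example, a quick sketch assuming a DataFrame named df and writable output paths:

    # Any Spark-supported sink works, not just Delta
    df.write.format("parquet").save("/mnt/out/events_parquet")
    df.write.format("csv").option("header", "true").save("/mnt/out/events_csv")
    df.write.format("json").save("/mnt/out/events_json")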

User16826992666
by Valued Contributor
  • 2167 Views
  • 1 reply
  • 0 kudos

What happens if a spot instance worker is lost in the middle of a query?

Does the query have to be re-run from the start, or can it continue? Trying to evaluate what risk there is by using spot instances for production jobs

Latest Reply
User16826992666
Valued Contributor
  • 0 kudos

If a spot instance is reclaimed in the middle of a job, then Spark will treat it as a lost worker. The Spark engine will automatically retry the tasks from the lost worker on other available workers. So the query does not have to start over if indivi...
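One common way to limit this risk on AWS is to keep the driver (and optionally a few workers) on demand; a sketch of the relevant Clusters API fields, with illustrative values:

    # Partial cluster spec: the first node (the driver) stays on demand,
    # remaining workers use spot with fallback to on-demand capacity
    cluster_spec = {
        "num_workers": 8,
        "aws_attributes": {
            "first_on_demand": 1,
            "availability": "SPOT_WITH_FALLBACK",
        },
    }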

User16826992666
by Valued Contributor
  • 809 Views
  • 1 reply
  • 0 kudos

If I export a notebook as HTML, will it update when the notebook changes in the workspace?
Latest Reply
User16826992666
Valued Contributor
  • 0 kudos

No, the HTML is a point-in-time snapshot of the notebook from when you perform the export. Visuals and data results in the HTML do not update when changes are made to the notebook in the workspace.

User16826992666
by Valued Contributor
  • 680 Views
  • 1 reply
  • 0 kudos

Which MLlib library am I supposed to use - pyspark.mllib or pyspark.ml?

Both of these libraries seem to be available, and both are for MLlib. How do I know which one to use?

Latest Reply
User16826992666
Valued Contributor
  • 0 kudos

The pyspark.mllib library is built for RDDs, and the pyspark.ml library is built for DataFrames. The RDD-based mllib library is currently in maintenance mode, while the DataFrame library will continue to receive updates and active development. For t...
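A minimal sketch of the recommended DataFrame-based API, assuming a DataFrame df with feature columns x1 and x2 and a label column:

    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    # Assemble feature columns into the single vector column pyspark.ml expects
    assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
    train = assembler.transform(df).select("features", "label")

    model = LinearRegression(featuresCol="features", labelCol="label").fit(train)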

User16826992666
by Valued Contributor
  • 2047 Views
  • 1 reply
  • 0 kudos

Can I prevent users from downloading data from a notebook?

By default any user can download a copy of the data they query in a notebook. Is it possible to prevent this?

Latest Reply
User16826992666
Valued Contributor
  • 0 kudos

You can limit the ways that users can save copies of the data they have access to in a notebook, but not prevent it entirely. The download button which exists for cells in Databricks notebooks can be disabled in the "Workspace Settings" section of th...

User16826994223
by Honored Contributor III
  • 1804 Views
  • 1 reply
  • 0 kudos

How to get files with a prefix from an S3 bucket in PySpark?

I have different files in my S3 bucket. Now I want to get the files that start with cop_.

Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

You are referencing a FileInfo object when calling .startswith(), not a string. The file name is a property of the FileInfo object, so filename.name.startswith('cop_') should work.
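Put together, a short sketch, assuming this runs in a Databricks notebook where dbutils is available and using a placeholder bucket path:

    # List the directory and keep only files whose name starts with the prefix
    files = dbutils.fs.ls("s3://my-bucket/incoming/")
    cop_files = [f.path for f in files if f.name.startswith("cop_")]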

User16826994223
by Honored Contributor III
  • 1017 Views
  • 1 reply
  • 2 kudos

Where do SQL endpoints run?

Where do Databricks SQL endpoints run?

Latest Reply
User16826994223
Honored Contributor III
  • 2 kudos

Like Databricks clusters, SQL endpoints are created and managed in your cloud account (AWS, Azure, or GCP). SQL endpoints manage SQL-optimized clusters automatically in your account and scale to match end-user demand.

User16826994223
by Honored Contributor III
  • 5670 Views
  • 1 reply
  • 0 kudos

What does it mean that Delta Lake supports multi-cluster writes?

What does it mean that Delta Lake supports multi-cluster writes? Please explain. Can we write to the same Delta table from multiple clusters?

Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

It means that Delta Lake does locking to make sure that queries writing to a table from multiple clusters at the same time won’t corrupt the table. However, it does not mean that if there is a write conflict (for example, update and delete the same t...
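In practice that looks something like the sketch below, assuming a DataFrame df and a Delta path: concurrent appends succeed, while a genuine conflict raises an exception the writer should catch and retry:

    import time

    # Appends from several clusters are coordinated by Delta's commit protocol;
    # a real conflict (e.g. two jobs rewriting the same files) raises an exception
    for attempt in range(3):
        try:
            df.write.format("delta").mode("append").save("/mnt/delta/events")
            break
        except Exception:
            time.sleep(2 ** attempt)  # back off, then retry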

