Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Maverick1
by Valued Contributor II
  • 3603 Views
  • 10 replies
  • 9 kudos

Resolved! Lineage between model and source code breaks on movement of source notebook. How to rectify it?

If there is a registered model and it is linked with a notebook, the lineage breaks if you move the notebook to a different path, or even pull/upload a new version of the notebook. This is not good, because when someone is doing their development/testin...

Latest Reply
sean_owen
Honored Contributor II

I also cannot reproduce this with these exact steps (I think). After moving the notebook and moving it back, the link to it (and the link to the revision) still works as expected. You are using the MLflow built into Databricks, right?

9 More Replies
RantoB
by Valued Contributor
  • 6576 Views
  • 3 replies
  • 3 kudos

Resolved! %run file not found

Hi, I was using the following command to import variables and functions from another notebook: %run ./utils. For some reason it is not working anymore and gives me this message: Exception: File `'./utils.py'` not found. utils.py is still at the same pl...

Latest Reply
RantoB
Valued Contributor

Finally, I just solved my issue. Actually, in the same cell I had written a comment starting with #, and it was not working because of that...
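
For reference, a minimal sketch of the working layout: the fix is to keep the %run magic alone in its cell. helper_function is a hypothetical name assumed to be defined in utils.

Cell 1 (nothing else in this cell, not even a # comment):

    %run ./utils

Cell 2 (names defined in utils are now in scope):

    helper_function()  # hypothetical function assumed to be defined in utils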

2 More Replies
Mohit_m
by Valued Contributor II
  • 1057 Views
  • 2 replies
  • 4 kudos

Enabling the Task Orchestration feature in Jobs via the API as well

Databricks supports the ability to orchestrate multiple tasks within a job. You must enable this feature in the admin console. Once enabled, this feature cannot be disabled. To enable orch...
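
The excerpt above is cut off before the API call. As a hedged sketch of the kind of request involved: PATCH /api/2.0/workspace-conf is the workspace-settings endpoint, but the setting key below is an assumption for illustration, not confirmed by the post.

    import requests

    HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                       # placeholder

    # "enableTaskOrchestration" is an ASSUMED key name; check the
    # original post or the admin console for the exact flag.
    resp = requests.patch(
        f"{HOST}/api/2.0/workspace-conf",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"enableTaskOrchestration": "true"},
    )
    resp.raise_for_status()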

Latest Reply
Kaniz_Fatma
Community Manager

Thank you @Mohit Miglani for this amazing post.

1 More Replies
FemiAnthony
by New Contributor III
  • 3464 Views
  • 5 replies
  • 3 kudos

Resolved! Location of customer_t1 dataset

Can anyone tell me how I can access the customer_t1 dataset that is referenced in the book "Delta Lake - The Definitive Guide"? I am trying to follow along with one of the examples.

Latest Reply
Hubert-Dudek
Esteemed Contributor III

Some files are visualized here: https://github.com/vinijaiswal/delta_time_travel/blob/main/Delta%20Time%20Travel.ipynb, but it is quite strange that the source is not in the repository. I think the only way is to write to Vini Jaiswal on GitHub.

4 More Replies
Sandesh87
by New Contributor III
  • 2742 Views
  • 2 replies
  • 2 kudos

Resolved! dbutils.secrets.get- NoSuchElementException: None.get

The code below executes a 'get' API method to retrieve objects from S3 and write them to the data lake. The problem arises when I use dbutils.secrets.get to fetch the keys required to establish the connection to S3: my_dataframe.rdd.foreachPartition(partition ...

Latest Reply
Kaniz_Fatma
Community Manager

Hi @Sandesh Puligundla, you just need to move the following two lines:

    val AccessKey = dbutils.secrets.get(scope = "ADB_Scope", key = "AccessKey-ID")
    val SecretKey = dbutils.secrets.get(scope = "ADB_Scope", key = "AccessKey-Secret")

outside of the fo...
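
In Python terms, the same fix looks roughly like this (scope and key names are taken from the reply; write_partition is a hypothetical stand-in for the original partition function):

    # Fetch the secrets on the driver, outside foreachPartition, so they
    # are captured in the closure as plain strings -- dbutils is not
    # available on executors, hence the NoSuchElementException.
    access_key = dbutils.secrets.get(scope="ADB_Scope", key="AccessKey-ID")
    secret_key = dbutils.secrets.get(scope="ADB_Scope", key="AccessKey-Secret")

    def write_partition(rows):
        # Use access_key / secret_key to talk to S3 here; do NOT call
        # dbutils.secrets.get inside this function.
        for row in rows:
            ...

    my_dataframe.rdd.foreachPartition(write_partition)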

1 More Replies
Mohit_m
by Valued Contributor II
  • 697 Views
  • 1 replies
  • 2 kudos

docs.databricks.com

Are the EBS volumes used by Databricks clusters encrypted, especially the root volumes?

Latest Reply
Mohit_m
Valued Contributor II

Yes, these EBS volumes are encrypted. Earlier, root volume encryption was not supported, but it has recently been enabled as well (since April 2021). Please find more details on the docs page: https://docs.databricks.com/clusters/configure.html#e...

FemiAnthony
by New Contributor III
  • 4614 Views
  • 6 replies
  • 5 kudos

Resolved! /dbfs is empty

Why does /dbfs seem to be empty in my Databricks cluster? If I run %sh ls /dbfs, I get no output. I am looking for the databricks-datasets subdirectory, but I can't find it under /dbfs.

Latest Reply
FemiAnthony
New Contributor III

Thanks @Prabakar Ammeappin
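
The accepted fix isn't quoted above. For what it's worth, dbutils.fs talks to DBFS directly and works even where the /dbfs FUSE mount is unavailable (for example on Community Edition clusters):

    # Lists the datasets through the DBFS API rather than the FUSE mount.
    display(dbutils.fs.ls("/databricks-datasets"))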

5 More Replies
Sandesh87
by New Contributor III
  • 1658 Views
  • 3 replies
  • 2 kudos

Resolved! log error to cosmos db

Objective: retrieve objects from an S3 bucket using a 'get' API call, write the retrieved objects to Azure Data Lake, and, in case of errors like 404s (object not found), write the error message to Cosmos DB. "my_dataframe" consists of a column (s3Obje...

Latest Reply
User16763506477
Contributor III

Hi @Sandesh Puligundla, the issue is that you are using the Spark context inside foreachPartition. You can create a DataFrame only on the Spark driver. A few Stack Overflow references: https://stackoverflow.com/questions/46964250/nullpointerexception-creatin...
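
A hedged sketch of the shape of the fix: produce the error records on the executors and only build a DataFrame from them back on the driver. The s3ObjectName column comes from the thread; fetch_object is a hypothetical helper standing in for the S3 'get' plus the data-lake write.

    # Runs on the executors: try each object and yield an error record
    # on failure, instead of touching the SparkContext.
    def fetch_partition(rows):
        for row in rows:
            try:
                fetch_object(row.s3ObjectName)   # hypothetical helper
            except Exception as e:               # e.g. a 404 / not-found
                yield (row.s3ObjectName, str(e))

    # Back on the driver, where creating a DataFrame is allowed:
    errors_df = (my_dataframe.rdd
                 .mapPartitions(fetch_partition)
                 .toDF(["s3ObjectName", "error"]))
    # errors_df can then be written to Cosmos DB with the usual connector.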

2 More Replies
SEOCO
by New Contributor II
  • 2483 Views
  • 3 replies
  • 3 kudos

Passing parameters from DevOps Pipeline/release to DataBricks Notebook

Hi, this is all a bit new to me. Does anybody have any idea how to pass a parameter to a Databricks notebook? I have a DevOps pipeline/release that moves my Databricks notebooks towards the QA and Production environments. The only problem I am facing is th...

Latest Reply
Anonymous
Not applicable

@Mario Walle - If @Hubert Dudek's answer solved the issue, would you be happy to mark his answer as best so that it will be more visible to other members?
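
The accepted answer isn't quoted above, but a common pattern (not necessarily the one proposed in this thread) is to expose the value as a notebook widget and let the pipeline pass it as a job parameter:

    # In the notebook: declare a widget with a default for interactive runs.
    dbutils.widgets.text("environment", "dev")
    env = dbutils.widgets.get("environment")
    print(f"Deploying against: {env}")
    # The DevOps release then passes {"environment": "QA"} (or "Prod")
    # as a notebook parameter when it triggers the run via the Jobs API.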

2 More Replies
Jeff_Luecht
by New Contributor II
  • 2961 Views
  • 2 replies
  • 4 kudos

Resolved! Restarting existing Community Edition clusters

I am new to Databricks Community Edition. I was following the quickstart guide and running through basic cluster management - create, start, etc. For whatever reason, I cannot restart an existing cluster. There is nothing in the cluster event logs or...

Latest Reply
Kaniz_Fatma
Community Manager

Hi @Jeff Luecht, please refresh the event logs. You can also clone your cluster. As a Community Edition user, your cluster will automatically terminate after an idle period of two hours. For more configuration options, please upgrade your Databricks subscri...

1 More Replies
Erik
by Valued Contributor II
  • 2715 Views
  • 6 replies
  • 2 kudos

Resolved! Does Z-ordering speed up reading of a single file?

Situation: we have one partition per date, and it just so happens that each partition ends up (after optimize) as *a single* 128 MB file. We partition on date and Z-order on userid, and our query is something like "find max value of column A where useri...

Latest Reply
-werners-
Esteemed Contributor III

Z-Order will make sure that, in case you need to read multiple files, these files are co-located. For a single file this does not matter, as a single file is always local to itself. If you are certain that your Spark program will only read a single file,...
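
For context, the layout under discussion comes from a command along these lines (events is a hypothetical table name and the partition value is illustrative):

    spark.sql("""
        OPTIMIZE events
        WHERE date = '2021-10-01'   -- one partition per date
        ZORDER BY (userid)          -- co-locates userid ranges across files
    """)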

5 More Replies
Alexander1
by New Contributor III
  • 2413 Views
  • 5 replies
  • 0 kudos

Databricks JDBC 2.6.19 documentation

I am searching for the Databricks JDBC 2.6.19 documentation page. I can find release notes from the Databricks download page (https://databricks-bi-artifacts.s3.us-east-2.amazonaws.com/simbaspark-drivers/jdbc/2.6.19/docs/release-notes.txt) but on Mag...

Latest Reply
Alexander1
New Contributor III

By the way, what is still wild is that the Simba docs say 2.6.16 only supports up to Spark 2.4, while the release notes on the Databricks download page say 2.6.16 already supports Spark 3.0. Strange that we get contradicting info from the actual driv...

4 More Replies
Daniel
by New Contributor III
  • 6930 Views
  • 11 replies
  • 6 kudos

Resolved! Autocomplete of parentheses, quotation marks, brackets, and square brackets stopped working

Hello guys, can someone help me? Autocomplete of parentheses, quotation marks, brackets, and square brackets stopped working in Python notebooks. How can I fix this? Daniel

Latest Reply
Daniel
New Contributor III

@Piper Wilson, @Werner Stinckens, thank you so much for your help. I followed @Jose Gonzalez's suggestion and now it works.

10 More Replies
Hubert-Dudek
by Esteemed Contributor III
  • 9220 Views
  • 5 replies
  • 17 kudos

Resolved! Optimize and Vacuum - which is the best order of operations?

Optimize -> Vacuum, or Vacuum -> Optimize?

Latest Reply
-werners-
Esteemed Contributor III

I optimize first, as Delta Lake knows which files are relevant for the optimize. That way I have my optimized data available faster. Then a vacuum. It seemed logical to me, but I might be wrong; I never actually thought about it.
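
As a sketch of the order described (my_table is a hypothetical table name):

    spark.sql("OPTIMIZE my_table")   # compact small files first
    spark.sql("VACUUM my_table")     # then drop files no longer referenced
    # Note: VACUUM only deletes files older than the retention window
    # (7 days by default), so files just rewritten by OPTIMIZE remain
    # until they age out.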

4 More Replies
