Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Ajay-Pandey
by Esteemed Contributor III
  • 5080 Views
  • 9 replies
  • 11 kudos

Databricks now supports running selected text in a cell

Databricks has started supporting running selected text in a cell; this will help a lot when debugging code. In Windows, just select the lines of code you want to execute and press Ctrl+Shift+Enter.

Latest Reply
Nhan_Nguyen
Valued Contributor
  • 11 kudos

Thanks @Ajay Pandey, nice sharing.

8 More Replies
SIRIGIRI
by Contributor
  • 747 Views
  • 0 replies
  • 1 kudos


During a shuffle operation, why does data move from memory to disk? Please find the detailed answer here; if you have any questions, please comment, and hit like and share if you are interested in upcoming articles: https://medium.com/@sharikrishna26/during-shuffle-operation-...
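The short version, as I understand it: shuffle data is buffered in executor memory and spills to disk once it no longer fits, so spill can often be reduced by giving each task less data. A minimal sketch, assuming a made-up "sales" table and an illustrative partition count:

    # More shuffle partitions -> smaller per-task shuffle blocks, which are more
    # likely to fit in execution memory instead of spilling to disk.
    spark.conf.set("spark.sql.shuffle.partitions", "400")   # default is 200

    df = spark.read.table("sales")               # assumed table name
    agg = df.groupBy("customer_id").count()      # groupBy triggers a shuffle
    agg.write.mode("overwrite").saveAsTable("sales_by_customer")
    # Any spill shows up in the Spark UI stage metrics as "Spill (Memory)" / "Spill (Disk)".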

Ancil
by Contributor II
  • 2233 Views
  • 1 reply
  • 1 kudos

PythonException: 'RuntimeError: The length of output in Scalar iterator pandas UDF should be the same with the input's; however, the length of output was 1 and the length of input was 2.'.

I have a pandas_udf; it works for 4 rows, but when I try it with more than 4 rows I get the error below. PythonException: 'RuntimeError: The length of output in Scalar iterator pandas UDF should be the same with the input's; however, the length of output was...

Latest Reply
Ancil
Contributor II
  • 1 kudos

@Kaniz Fatma, can you please help me with pandas_udf? In the scenario above I used regular expressions, for which we have a Spark method, but I have other pandas_udfs with the same issue.
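That error usually means the UDF is not yielding one output batch per input batch, or a yielded batch has a different length than its input. A minimal sketch of a Scalar Iterator pandas_udf that keeps output and input lengths equal; the column name and regex are made-up examples:

    import re
    from typing import Iterator
    import pandas as pd
    from pyspark.sql import functions as F
    from pyspark.sql.functions import pandas_udf

    @pandas_udf("string")
    def clean_text(batches: Iterator[pd.Series]) -> Iterator[pd.Series]:
        pattern = re.compile(r"[^A-Za-z0-9 ]")
        # Yield exactly one output Series per incoming batch, with the same length.
        for batch in batches:
            yield batch.astype(str).str.replace(pattern, "", regex=True)

    # Hypothetical usage:
    # df = df.withColumn("clean_text", clean_text(F.col("raw_text")))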

tatekeller
by New Contributor
  • 3438 Views
  • 1 reply
  • 0 kudos

Can you access a repo file in an init script?

I'd like to configure a cluster with python libraries as defined in a requirements file. I have a pip requirements.txt file in a private repo which I have integrated on Databricks (and I can access it through the UI and view it on Databricks). I upda...

Latest Reply
sher
Valued Contributor II
  • 0 kudos

You can install the libraries on the cluster.
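If an init script turns out to be awkward, a notebook-scoped alternative is to install straight from the repo's requirements file with %pip; the repo path below is a made-up example and assumes a runtime where repo files are visible under /Workspace/Repos:

    # Notebook-scoped install from a requirements.txt checked into a Databricks Repo.
    # Path is hypothetical; adjust to your user/repo.
    %pip install -r /Workspace/Repos/someone@example.com/my-repo/requirements.txt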

KVNARK
by Honored Contributor II
  • 1295 Views
  • 1 reply
  • 5 kudos

Accessing a secret from the Spark cluster

Passing Spark configuration to access Blob and ADLS from Data Factory while creating a job cluster works fine, but when we reference a secret in the property it does not work: spark.hadoop.fs.azure.account.auth.type.{{secrets/scope/key}}.dfs.core.wi...

Latest Reply
sher
Valued Contributor II
  • 5 kudos

Check here: https://docs.databricks.com/security/secrets/secrets.html
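As far as I can tell from that page, the {{secrets/<scope>/<key>}} reference is resolved only when it appears as a Spark property's value, not inside the property name (where the storage account name goes), which may be why the config above fails. An alternative sketch that reads the secrets at runtime; the scope, key, and storage account names are made-up examples:

    # Session-level ADLS Gen2 OAuth config using secrets fetched with dbutils.
    storage_account = "mystorageacct"          # hypothetical storage account
    scope = "my-scope"                         # hypothetical secret scope

    spark.conf.set(f"fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net", "OAuth")
    spark.conf.set(f"fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net",
                   "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
    spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage_account}.dfs.core.windows.net",
                   dbutils.secrets.get(scope=scope, key="sp-client-id"))
    spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage_account}.dfs.core.windows.net",
                   dbutils.secrets.get(scope=scope, key="sp-client-secret"))
    spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{storage_account}.dfs.core.windows.net",
                   "https://login.microsoftonline.com/<tenant-id>/oauth2/token")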

sonali1996
by New Contributor
  • 1863 Views
  • 2 replies
  • 0 kudos

Adding a widget value as a column and populating it in a table every time

Hi, I want the runtime date from ADF as @utcnow() (a base parameter of the notebook activity in ADF), take it in ADB using a widget named runtime_date, and then add that column to my table X populated with the value from the widget. Eve...

Latest Reply
sher
Valued Contributor II
  • 0 kudos

You can use current_timestamp() or now(). Refer to: https://docs.databricks.com/sql/language-manual/functions/current_timestamp.html
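A minimal sketch of what the original question seems to be after: ADF passes @utcnow() into a notebook widget, and the widget value is stamped onto every row before writing. Widget, table, and column names are made-up examples:

    from pyspark.sql.functions import lit

    dbutils.widgets.text("runtime_date", "")               # filled by ADF's base parameter
    runtime_date = dbutils.widgets.get("runtime_date")

    df = spark.read.table("X")                              # the question's table X
    df = df.withColumn("runtime_date", lit(runtime_date))   # cast to timestamp if needed
    df.write.mode("overwrite").saveAsTable("X_with_runtime") # hypothetical target table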

1 More Replies
Ajay-Pandey
by Esteemed Contributor III
  • 30113 Views
  • 6 replies
  • 7 kudos

Resolved! What does "Determining location of DBIO file fragments..." mean, and how do I speed it up?

"Determining location of DBIO file fragments. This operation can take some time." What does this mean, and how do I prevent it from having to perform this apparently expensive operation every time? This happens even when all the underlying tables are De...

Latest Reply
Christianben9
New Contributor II
  • 7 kudos

"Determining location of DBIO file fragments" is a message that may be displayed during the boot process of a computer running the NetApp Data ONTAP operating system. This message indicates that the system is currently in the process of identifying an...

5 More Replies
cgrant
by Databricks Employee
  • 4955 Views
  • 4 replies
  • 6 kudos

How do I know how much of a query/job used Photon?

I'm trying to use the native execution engine, Photon. How can I tell if a query is using Photon or is falling back to the non-native Spark engine?

Latest Reply
venkat09
New Contributor III
  • 6 kudos

Correcting a typo in the second point of my previous post: click the execution plan of your task (available under the SQL/DataFrame tab in the Spark UI). It shows which operations ran in the Photon engine and which did not execute in Photon.
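For a quick check from a notebook, the physical plan also works: on a Photon-enabled cluster, operators handled by Photon show up with Photon-prefixed names, and anything left as a plain Spark operator fell back to the non-native engine. A small sketch with an assumed table name:

    df = spark.read.table("sales")            # assumed table name
    agg = df.groupBy("customer_id").count()
    agg.explain(mode="formatted")             # scan the plan output for "Photon..." nodes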

3 More Replies
patdev
by New Contributor III
  • 8614 Views
  • 9 replies
  • 2 kudos

Text datatype not supported, and a text field holding huge data: how do I bring it over?

Hello all, I have a medical data file and one of the fields is a text field with huge data. The big problem is that Databricks does not support the text data type, so how can I bring the data over? I tried conversion and casting in various ways, but so far not ...

Latest Reply
patdev
New Contributor III
  • 2 kudos

Setting escapeQuotes to false helped to bring the huge text data into the column. Thanks.
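For context, Spark has no TEXT type, but StringType is unbounded, so a large free-text field can be read as a plain string column. A hedged sketch of reading such a file; the path and column names are made-up, and the quote/escape/multiLine options only matter if the text contains embedded quotes or newlines:

    df = (spark.read.format("csv")
          .option("header", "true")
          .option("multiLine", "true")   # text field may contain newlines
          .option("quote", '"')
          .option("escape", '"')         # handle embedded double quotes
          .load("/mnt/raw/medical_notes.csv"))

    df.select("patient_id", "notes").write.mode("overwrite").saveAsTable("medical_notes")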

8 More Replies
Gaurav_784295
by New Contributor III
  • 3149 Views
  • 2 replies
  • 0 kudos

pyspark.sql.utils.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets

pyspark.sql.utils.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets. I am getting this error while writing; can anyone please tell me how to resolve it?

Latest Reply
Gaurav_784295
New Contributor III
  • 0 kudos

I'm trying to run a query on some table and then store the result in another table.

query = (stream
         .writeStream
         .format("delta")
         .foreachBatch(batch_function)
         .option('checkpointLocation', self.checkpoint_loc)
         .trigger(processingTime...
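One way this error is commonly avoided: non-time-based window functions are not allowed directly on a streaming DataFrame, but inside foreachBatch each micro-batch is a static DataFrame, so they work there. A hedged sketch along the lines of the snippet above; column, table, and path names are made-up examples:

    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    def batch_function(batch_df, batch_id):
        # batch_df is a static DataFrame, so a non-time-based window is allowed here.
        w = Window.partitionBy("key").orderBy(F.col("updated_at").desc())
        latest = (batch_df
                  .withColumn("rn", F.row_number().over(w))
                  .filter("rn = 1")
                  .drop("rn"))
        latest.write.format("delta").mode("append").saveAsTable("target_table")

    (stream.writeStream
           .foreachBatch(batch_function)
           .option("checkpointLocation", "/tmp/checkpoints/target_table")
           .trigger(processingTime="1 minute")
           .start())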

1 More Replies
ty2
by New Contributor II
  • 3221 Views
  • 3 replies
  • 1 kudos

Resolved! How to start my cluster

I tried to stop my_cluster from Compute using the admin role. However, using the same account, I could not restart my_cluster. The information is as follows (see the attached screenshot). What should I do?

[Attachment: 20230121-my_cluster_not_start]
Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 1 kudos

It seems this is Community Edition; in CE this feature is disabled. Delete this cluster and create a new one.

2 More Replies
Sujitha
by Databricks Employee
  • 1157 Views
  • 1 reply
  • 2 kudos

Documentation Update January 13 - 19

Databricks documentation provides how-to guidance and reference information for data analysts, data scientists, and data engineers working in the Databricks Data Science & Engineering, Databricks Machine Learning, ...

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 2 kudos

thanks for the details

vk217
by Contributor
  • 2361 Views
  • 1 reply
  • 0 kudos

Resolved! Import course material to databricks

I signed up for the data engineering course and downloaded the course material. However, I cannot access the link to import the course material into Databricks. The link below gives me access denied: https://www.databricks.training/step-by-step/importing-co...

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 0 kudos

Use this link, download it to your local machine, and then import it; it will work: https://github.com/databricks-academy/data-engineering-with-databricks-english

Chris_Konsur
by New Contributor III
  • 10487 Views
  • 1 reply
  • 0 kudos

Resolved! Configuring the Databricks Job APIs, I get Error 403 User not authorized.

I'm configuring the Databricks Job APIs and I get Error 403 User not authorized. I found out the issue is that I need to apply a rule and set API permissions for Azure Databricks: Azure Portal > Azure Databricks > Azure Databricks Service > Access control (IAM...

Latest Reply
Aviral-Bhardwaj
Esteemed Contributor III
  • 0 kudos

For a particular job, the user who is trying to start it should have access or run permission on that job. Please grant the required permission and it will work.
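If it helps, the permission can also be granted programmatically. A hedged sketch using the Permissions API (assuming the /api/2.0/permissions/jobs/{job_id} endpoint and the CAN_MANAGE_RUN level); the host, token, job id, and user below are made-up examples:

    import requests

    host = "https://adb-1234567890123456.7.azuredatabricks.net"   # hypothetical workspace URL
    token = "<personal-access-token>"
    job_id = "123"

    resp = requests.patch(
        f"{host}/api/2.0/permissions/jobs/{job_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"access_control_list": [
            {"user_name": "someone@example.com", "permission_level": "CAN_MANAGE_RUN"}
        ]},
    )
    resp.raise_for_status()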

