Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

138999
by New Contributor
  • 1238 Views
  • 1 reply
  • 0 kudos

How are parallel and subsequent jobs handled by cluster?

Hello, apologies for the dumb question, but I'm new to Databricks and need clarification on the following. Are parallel and subsequent jobs able to reuse the same compute resources to keep time and cost overhead as low as possible, or do they spin up a new cl...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 0 kudos

@tanja.savic You can use a shared job cluster: https://docs.databricks.com/workflows/jobs/jobs.html#use-shared-job-clusters But remember that a shared job cluster is scoped to a single job run, and cannot be used by other jobs or runs of the...
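
A minimal sketch of what such a job could look like when created through the Jobs API 2.1, where both tasks reference the same job_cluster_key and therefore reuse one cluster within a single run. The workspace URL, token variable, notebook paths, and node type below are placeholders, not values from this thread:

    import os, requests

    job_spec = {
        "name": "example-shared-cluster-job",
        "job_clusters": [{
            "job_cluster_key": "shared_cluster",
            "new_cluster": {
                "spark_version": "11.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,
            },
        }],
        "tasks": [
            {"task_key": "task_a",
             "job_cluster_key": "shared_cluster",
             "notebook_task": {"notebook_path": "/Repos/example/task_a"}},
            {"task_key": "task_b",
             "depends_on": [{"task_key": "task_a"}],
             "job_cluster_key": "shared_cluster",
             "notebook_task": {"notebook_path": "/Repos/example/task_b"}},
        ],
    }

    resp = requests.post(
        "https://<workspace-url>/api/2.1/jobs/create",
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
        json=job_spec,
    )
    print(resp.json())   # contains the new job_id on success

Both tasks point at the same job_cluster_key, so they share one cluster inside a single run; a different job, or another run of this job, still gets its own cluster, which matches the scoping caveat above.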

Phani1
by Valued Contributor II
  • 1750 Views
  • 1 reply
  • 1 kudos

Resolved! Databricks - Calling a dashboard from another dashboard

Hi Team, can we call a dashboard from another dashboard? An example screenshot is attached. The main dashboard has 3 buttons that point to 3 different dashboards, and clicking any of the buttons should redirect to the respective dashboard.

Latest Reply
daniel_sahal
Esteemed Contributor
  • 1 kudos

@Janga Reddy I don't think that this is possible at the moment. You can raise a feature request here: https://docs.databricks.com/resources/ideas.html

Ancil
by Contributor II
  • 3859 Views
  • 3 replies
  • 1 kudos

Resolved! PythonException: 'RuntimeError: The length of output in Scalar iterator pandas UDF should be the same with the input's; however, the length of output was 1 and the length of input was 2.'.

I have a pandas_udf; it works for 1 row, but when I try it with more than one row I get the error below. PythonException: 'RuntimeError: The length of output in Scalar iterator pandas UDF should be the same with the input's; however, the length of output w...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 1 kudos

I was testing, and your function is correct. So you must have an error in the inputData type (it is all strings) or with result_json. Please also check the runtime version; I was using 11 LTS.
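
For reference, a minimal Scalar Iterator pandas UDF that satisfies the constraint the error message refers to: every yielded batch must be exactly as long as the batch it came from. The column name and the +1 logic are purely illustrative, not from the thread:

    from typing import Iterator
    import pandas as pd
    from pyspark.sql.functions import pandas_udf

    @pandas_udf("long")
    def plus_one(batches: Iterator[pd.Series]) -> Iterator[pd.Series]:
        for batch in batches:
            # The yielded Series must have the same length as the input batch;
            # returning a single aggregated value per batch triggers the RuntimeError above.
            yield batch + 1

    df = spark.range(4).toDF("value")
    df.select(plus_one("value")).show()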

2 More Replies
Brave
by New Contributor II
  • 5893 Views
  • 5 replies
  • 3 kudos

Resolved! Exporting R data frame variable

Hi all. I am trying to export an R data frame variable as a CSV file. I am using this code:
df <- data.frame(VALIDADOR_FIM)
df.coalesce(1).write.format("com.databricks.spark.csv").option("header", "true").save("dbfs:/FileStore/df/df.csv")
But it isn't working. ...

Latest Reply
sher
Valued Contributor II
  • 3 kudos

Please try to execute write.csv with the following path instead:
write.csv(TotalData, file='/dbfs/tmp/df.csv', row.names = FALSE)
%fs ls /tmp

4 More Replies
Prem1
by New Contributor III
  • 21271 Views
  • 21 replies
  • 11 kudos

java.lang.IllegalArgumentException: java.net.URISyntaxException

I am using Databricks Autoloader to load JSON files from ADLS Gen2 incrementally in directory listing mode. All source filenames have a timestamp in them. The Autoloader works perfectly for a couple of days with the below configuration and breaks the next day ...
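
For context, a generic Auto Loader stream in directory listing mode looks roughly like the sketch below; the container/account names, schema and checkpoint locations, and target table are placeholders, not the poster's actual configuration. The java.net.URISyntaxException in the title is typically what Hadoop paths throw when a file name contains characters such as ':' that are not valid in a URI, which fits the timestamped file names described above.

    df = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.useIncrementalListing", "auto")   # incremental directory listing, as in the thread
          .option("cloudFiles.schemaLocation", "abfss://<container>@<account>.dfs.core.windows.net/_schemas/events/")
          .load("abfss://<container>@<account>.dfs.core.windows.net/raw/events/"))

    (df.writeStream
       .option("checkpointLocation", "abfss://<container>@<account>.dfs.core.windows.net/_checkpoints/events/")
       .toTable("bronze.events"))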

Latest Reply
jshields
New Contributor II
  • 11 kudos

Hi everyone, I'm seeing this issue as well - the same configuration as the previous posts, using Autoloader with incremental file listing turned on. The strange part is that it mostly works despite almost all of the files we're loading having colons incl...

20 More Replies
Sandesh87
by New Contributor III
  • 6398 Views
  • 4 replies
  • 2 kudos

spark-streaming read from specific event hub partition

The Azure Event Hub "my_event_hub" has a total of 5 partitions ("0", "1", "2", "3", "4"). The readStream should only read events from partitions "0" and "4". Event Hub configuration as the streaming source:
val name = "my_event_hub"
val connectionString = "m...

Latest Reply
keshav
New Contributor II
  • 2 kudos

I tried using the below snippet to receive messages only from partition id=0:
ehName = "<<EVENT-HUB-NAME>>"
# Create event position for partition 0
positionKey1 = { "ehName": ehName, "partitionId": 0 }
eventPosition1 = { "offset": "@latest", ...
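
Completing that pattern along the lines of the azure-event-hubs-spark connector's PySpark configuration docs might look like the sketch below. The connection string is a placeholder, and as far as I know these entries only control where partitions 0 and 4 start reading; the connector still subscribes to every partition, so restricting to a subset would additionally need a filter on the source's partition column:

    import json

    ehName = "my_event_hub"
    connection_string = "<EVENT-HUB-CONNECTION-STRING>"   # placeholder

    # Per-partition starting positions for partitions 0 and 4
    positions = {
        json.dumps({"ehName": ehName, "partitionId": p}): {
            "offset": "-1",        # start of stream; "@latest" would start at the end
            "seqNo": -1,
            "enqueuedTime": None,
            "isInclusive": True,
        }
        for p in (0, 4)
    }

    ehConf = {
        # newer connector versions expect the connection string to be encrypted with
        # sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connection_string)
        "eventhubs.connectionString": connection_string,
        "eventhubs.startingPositions": json.dumps(positions),
    }

    df = spark.readStream.format("eventhubs").options(**ehConf).load()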

3 More Replies
databicky
by Contributor II
  • 4935 Views
  • 3 replies
  • 0 kudos

Resolved! How to add a background color to an Excel sheet with Python

I just want to add color to specific cells of an Excel sheet with Python, and I have done that, but I need to exclude the header column. When I tried the same method on another sheet it didn't work; the background color addition is reflected in one sheet but...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 0 kudos

Convert your dataframe to pandas-on-Spark, color the cells using the style property (https://spark.apache.org/docs/latest/api/python/reference/pyspark.pandas/api/pyspark.pandas.DataFrame.style.html), and export to Excel using to_excel (https://spark.apache.org/d...
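
A small sketch of that approach, assuming the original Spark DataFrame is called df and styling a hypothetical "amount" column; applymap only styles data cells, so the header keeps its default formatting, which addresses the exclusion concern above:

    # requires openpyxl on the cluster
    psdf = df.to_pandas_on_spark()        # df is the original Spark DataFrame (name assumed)

    styler = psdf.style.applymap(         # .style collects the data and returns a pandas Styler
        lambda v: "background-color: #FFFF00",
        subset=["amount"],                # hypothetical column; only these data cells are colored
    )

    styler.to_excel("/dbfs/tmp/styled.xlsx", engine="openpyxl")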

2 More Replies
alvaro_databric
by New Contributor III
  • 1546 Views
  • 1 reply
  • 0 kudos

Relation between Driver and Executor size

Hi, I would like to ask for recommendations regarding the size of the driver and the number of executors managed by that driver. I am aware of the best practices regarding executor size/number, but I have doubts about the number of executors a single dr...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 0 kudos

It depends on your use case. The best approach is to connect Datadog and look at driver and worker utilization: https://docs.datadoghq.com/integrations/databricks/?tab=driveronly Just from my experience: usually, for big datasets, when you need to autoscale workers between ...

alvaro_databric
by New Contributor III
  • 5559 Views
  • 1 reply
  • 1 kudos

Resolved! Task time Spark UI

Hello all, I would like to know why task times (among other times in the Spark UI) display values like 1h or 2h when the task really only takes seconds or minutes. What is the meaning of these high time values I see all around the Spark UI? Thanks in adv...

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

That is accumulated time, i.e. the total summed across all tasks and cores rather than wall-clock time; for example, 8 cores each busy for 15 minutes show up as 2h of task time. See https://stackoverflow.com/questions/73302982/task-time-and-gc-time-calculation-in-spark-ui-in-executor-section

bonyfus
by New Contributor II
  • 3628 Views
  • 3 replies
  • 0 kudos

Error when accessing the file from azure blob storage

I am getting the following error when accessing the file in Azure Blob Storage: java.io.FileNotFoundException: File /10433893690638/mnt/22200/22200Ver1.sps does not exist. Code:
ves_blob = dbutils.widgets.get("ves_blob")
try: dbutils.fs.ls(ves_blob) e...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

That is certainly an invalid path, as the error shows. With %fs ls /mnt you can show the directory structure of the /mnt directory, assuming the blob storage is mounted. If not, you need to define the access (URL, etc.).
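
A quick way to check what is mounted and whether the widget value points at a real path; the mount source, scope, and key names below are placeholders, not the poster's values:

    display(dbutils.fs.mounts())            # every mountPoint and its source
    display(dbutils.fs.ls("/mnt/22200"))    # does the expected mount path exist?

    # If the container is not mounted yet (placeholder names):
    dbutils.fs.mount(
        source="wasbs://<container>@<storage-account>.blob.core.windows.net",
        mount_point="/mnt/22200",
        extra_configs={
            "fs.azure.account.key.<storage-account>.blob.core.windows.net":
                dbutils.secrets.get("<scope>", "<key>")
        },
    )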

2 More Replies
lenonlmsv
by New Contributor II
  • 2610 Views
  • 3 replies
  • 0 kudos

Query API Result

Hi, I'm new here. Currently I have to read information from a query in Databricks. I've used the Query API to get the query definition, but so far I'm not able to run the query and get the results. Is it possible? Thanks

Latest Reply
daniel_sahal
Esteemed Contributor
  • 0 kudos

When using the Jobs API you need to call dbutils.notebook.exit("returnValue") to pass the results once the notebook finishes its job (https://docs.databricks.com/notebooks/notebook-workflows.html#notebook-workflows-exit). Then you can get notebook_...
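
A rough sketch of the two halves of that flow, assuming a notebook job and the Jobs API 2.1 runs/get-output endpoint; the query, run_id, workspace URL, and token variable are placeholders:

    # Inside the notebook the job runs: return the value you want to read back
    result = spark.sql("SELECT count(*) AS c FROM some_table").first()["c"]   # hypothetical query
    dbutils.notebook.exit(str(result))

    # From the caller, once the run has finished:
    import os, requests
    resp = requests.get(
        "https://<workspace-url>/api/2.1/jobs/runs/get-output",
        headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
        params={"run_id": 12345},   # placeholder run_id
    )
    print(resp.json()["notebook_output"]["result"])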

2 More Replies
databicky
by Contributor II
  • 7430 Views
  • 6 replies
  • 1 kudos

Resolved! how to check dataframe column value

In my dataframe there is one column named count; if that particular column's value is greater than zero, the job needs to fail. How can I do that?

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 1 kudos

Code without collect, which should not be used in production:
if df.filter("count > 0").count() > 0:
    dbutils.notebook.exit('Notebook Failed')
You can also use a more aggressive version:
if df.filter("count > 0").count() > 0:
    raise Exception("count bigge...

5 More Replies
151640
by New Contributor III
  • 4320 Views
  • 4 replies
  • 3 kudos

Resolved! Is there a known issue regarding Databricks JDBC driver character values such as Japanese etc?

A Parquet file contains character data for various languages and is shown by the Data Explorer UX. A simple "select *" query using the Databricks JDBC driver (version 2.6.29) with a tool such as SQLSquirrel displays invalid characters.

Latest Reply
151640
New Contributor III
  • 3 kudos

The issue encountered has been confirmed to be a defect in the Databricks JDBC driver.

3 More Replies
JD410993
by New Contributor II
  • 3301 Views
  • 3 replies
  • 2 kudos

Job runs indefinitely after integrating with PyDeequ

I'm using PyDeequ data quality checks in one of our jobs. After adding this check, I noticed that the job does not complete and keeps running indefinitely after the PyDeequ checks are completed and results are returned. As stated in the PyDeequ documentation ...
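
If the truncated documentation reference is PyDeequ's cleanup guidance, the pattern I have seen recommended is to shut down the Py4J callback server that PyDeequ starts once the checks are done, roughly as below; whether spark.stop() is also appropriate depends on how the job is run, so treat this as an assumption to verify rather than the library's confirmed fix:

    # after collecting the PyDeequ results
    spark.sparkContext._gateway.shutdown_callback_server()
    # spark.stop()   # sometimes also suggested, but may not be desirable on a shared cluster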

Latest Reply
-werners-
Esteemed Contributor III
  • 2 kudos

Hm, Deequ certainly works, as I have read about multiple people using it. And when reading the issues (open/closed) on the GitHub pages of PyDeequ, Databricks is mentioned in some issues, so it might be possible after all. But I think you need to check y...

2 More Replies
KVNARK
by Honored Contributor II
  • 4475 Views
  • 4 replies
  • 6 kudos

Resolved! How to parameterize the key of a Spark config in the job cluster linked service from ADF

How can we parameterize the key of the Spark config in the job cluster linked service from Azure Data Factory? We can parameterize the values, but any idea how we can parameterize the key so that when deploying to a further environment it takes the PROD/QA v...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 6 kudos

@KVNARK You can use Databricks Secrets (create a secret scope from AKV: https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes) and then reference a secret in the Spark configuration (https://learn.microsoft.com/en-us/azure/d...
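
The secret reference uses the {{secrets/<scope>/<key>}} path syntax inside the cluster's Spark config; a hedged example with placeholder scope, key, and storage account names:

    spark.hadoop.fs.azure.account.key.<storage-account>.dfs.core.windows.net {{secrets/<scope>/<storage-account-key>}}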

3 More Replies
