Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Data_Cowboy
by New Contributor III
  • 11949 Views
  • 3 replies
  • 1 kudos

Resolved! Plotting in pyspark.pandas: "Uncaught ReferenceError: Plotly is not defined"

Hi, I am trying to plot using pyspark.pandas, running this sample code: speed = [0.1, 17.5, 40, 48, 52, 69, 88]; lifespan = [2, 8, 70, 1.5, 25, 12, 28]; index = ['snail', 'pig', 'elephant', 'rabbit', 'giraffe', 'coyote', 'horse']; psdf = ps.Data...

[Attachment: Error Message]
Latest Reply
Data_Cowboy
New Contributor III
  • 1 kudos

Thank you @Werner Stinckens. I was able to find the Plotly documentation listed below; setting the output_type and calling displayHTML() remedied the error.
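
For reference, a minimal sketch of the fix described above, assuming a Databricks notebook (where displayHTML() is available) and pyspark.pandas' default Plotly backend; the output_type value follows Plotly's offline-plotting docs:

```
import pyspark.pandas as ps
from plotly.offline import plot

# Sample data from the original post.
speed = [0.1, 17.5, 40, 48, 52, 69, 88]
lifespan = [2, 8, 70, 1.5, 25, 12, 28]
index = ['snail', 'pig', 'elephant', 'rabbit', 'giraffe', 'coyote', 'horse']
psdf = ps.DataFrame({'speed': speed, 'lifespan': lifespan}, index=index)

fig = psdf.plot.bar()                # returns a Plotly Figure
html = plot(fig, output_type='div')  # render the figure to an HTML <div> string
displayHTML(html)                    # Databricks notebook helper to render raw HTML
```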

2 More Replies
arda_123
by New Contributor III
  • 4237 Views
  • 2 replies
  • 1 kudos

SQL Analytics Map Visualization: Map marker size

Hello all, I am trying to use the Map visualization in a SQL Analytics Dashboard in Databricks. Does anyone know how, or whether, we can change the size/radius of the markers based on values in another column? This seems like a very trivial parameter, but I ...

Latest Reply
arda_123
New Contributor III
  • 1 kudos

Thanks @Kaniz Fatma

1 More Reply
laurencewells
by New Contributor III
  • 6663 Views
  • 5 replies
  • 1 kudos

Autoloader and "cleanSource"

Hi All, we are trying to use the Spark 3 Structured Streaming option ".option('cleanSource','archive')" to archive processed files. This works as expected using the standard Spark implementation; however, it does not appear to work using aut...

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

https://docs.databricks.com/ingestion/auto-loader/options.html#common-auto-loader-options — cleanSource is not a listed option, so it won't do anything. Maybe event retention is something you can use?
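
To illustrate the distinction, a hedged sketch (paths and schema are placeholders): cleanSource and sourceArchiveDir belong to Spark's plain file stream source, while the cloudFiles (Auto Loader) source does not document them:

```
# Plain Spark 3 file source: cleanSource is supported here.
df = (spark.readStream
      .format("csv")
      .schema("id INT, value STRING")              # placeholder schema
      .option("cleanSource", "archive")
      .option("sourceArchiveDir", "/mnt/archive")  # required when archiving
      .load("/mnt/landing"))

# Auto Loader: cleanSource is not among the documented cloudFiles options,
# so archiving would have to be handled outside the stream.
df_auto = (spark.readStream
           .format("cloudFiles")
           .option("cloudFiles.format", "csv")
           .schema("id INT, value STRING")
           .load("/mnt/landing"))
```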

4 More Replies
RiyazAliM
by Honored Contributor
  • 8619 Views
  • 3 replies
  • 3 kudos

Is there a way to CONCAT two dataframes on either axis (row/column) and transpose the dataframe in PySpark?

I'm reshaping my dataframe as per requirement, and I came across this situation where I'm concatenating 2 dataframes and then transposing them. I've done this previously using pandas, and the syntax for pandas goes as below: import pandas as pd; df1 = ...

Latest Reply
RiyazAliM
Honored Contributor
  • 3 kudos

Hi @Kaniz Fatma, I no longer see the answer you've posted, but I see you were suggesting `union`. As per my understanding, union is used to stack DataFrames one upon another with similar schemas / column names. In my situation, I have 2 different...
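
As an illustration of the column-wise case, a hedged sketch that joins on a generated row index; it assumes equal row counts and that relying on monotonically_increasing_id ordering is acceptable, with toy data standing in for the real DataFrames:

```
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df1 = spark.createDataFrame([(1, 'a'), (2, 'b')], ['id', 'x'])
df2 = spark.createDataFrame([(10.0,), (20.0,)], ['y'])

# Tag each row with a positional index, then join the two frames on it.
w = Window.orderBy(F.monotonically_increasing_id())
df1_i = df1.withColumn('_row', F.row_number().over(w))
df2_i = df2.withColumn('_row', F.row_number().over(w))

side_by_side = df1_i.join(df2_i, on='_row').drop('_row')  # pandas-style axis=1 concat
side_by_side.show()
```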

2 More Replies
Maverick1
by Valued Contributor II
  • 10001 Views
  • 3 replies
  • 6 kudos

Is there any way to overwrite a partition in a Delta table without specifying each and every partition in replaceWhere? For non-dated partitions, this is really a mess with Delta tables.

Is there any way to overwrite a partition in a Delta table without specifying each and every partition in replaceWhere? For non-dated partitions, this is really a mess with Delta tables. Most of my DE teams don't want to adopt Delta because of these gl...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Hi @Saurabh Verma, following up: did you get a chance to check @Hubert Dudek's previous comments?
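
The thread shows no resolution, but for what it's worth, here is a hedged sketch of dynamic partition overwrite, which newer Delta Lake / DBR versions support as an alternative to spelling out a replaceWhere predicate (the table name is illustrative; check your runtime's docs before relying on it):

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy replacement data; assumes the target table exists and is
# partitioned by 'date'.
df = spark.createDataFrame([("2024-01-01", 42)], ["date", "value"])

# Overwrites only the partitions present in df; other partitions are untouched.
(df.write
   .format("delta")
   .mode("overwrite")
   .option("partitionOverwriteMode", "dynamic")  # per-write override
   .saveAsTable("events"))
```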

2 More Replies
Anonymous
by Not applicable
  • 2684 Views
  • 1 reply
  • 1 kudos

Query silently failed

Hello all, I'm using the older 6.4 runtime and noticed that a query returned no result, whereas the same query on 10.4 produced the expected result. This is bad, because I got no error, simply no result at all. Is there some Spark setting on the clus...

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Alessio Palma, following up: did you get a chance to check @Kaniz Fatma's previous comments?

Jack
by New Contributor II
  • 6476 Views
  • 1 reply
  • 1 kudos

Append an empty dataframe to a list of dataframes using for loop in python

I have the following 3 dataframes: I want to append df_forecast to each of df2_CA and df2_USA using a for-loop. However, when I run my code, df_forecast is not appended: df2_CA and df2_USA appear exactly as shown above. Here's the code: df_list=[df2_CA,...

Latest Reply
User16764241763
Databricks Employee
  • 1 kudos

@Jack Homareau, can you try the union functionality with DataFrames (https://sparkbyexamples.com/pyspark/pyspark-union-and-unionall/) and then try to fill the NaNs with the desired values?
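
A hedged sketch of that suggestion, with toy stand-ins for the DataFrames named in the post (unionByName's allowMissingColumns flag needs Spark 3.1+); note the list has to be rebuilt, since appending onto DataFrames inside a for-loop does not modify the originals in place:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Toy stand-ins for the DataFrames named in the post.
df2_CA = spark.createDataFrame([('CA', 1.0)], 'region STRING, sales DOUBLE')
df2_USA = spark.createDataFrame([('USA', 2.0)], 'region STRING, sales DOUBLE')
df_forecast = spark.createDataFrame([('ALL', None)], 'region STRING, sales DOUBLE')

# Union the forecast rows onto each frame and fill the resulting nulls.
df_list = [df.unionByName(df_forecast, allowMissingColumns=True).fillna(0.0)
           for df in [df2_CA, df2_USA]]
df_list[0].show()
```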

VM
by Contributor
  • 6539 Views
  • 4 replies
  • 2 kudos

Error using Synapse ML: JavaPackage object is not callable

I am using DBR version 10.1. I want to use the Synapse ML package. I am able to install and import it by following the instructions at the link: https://github.com/microsoft/SynapseML. However, when I try to run the code, it gives me the error shown in the att...

Latest Reply
User16764241763
Databricks Employee
  • 2 kudos

Hello @Vikram Mahawal, clusters need to be in the running state to install/uninstall libraries. Could you please start the cluster and try installing it? If you are still stuck, please file a support case with us so we can take a look. Thanks

3 More Replies
Vadim1
by New Contributor III
  • 4269 Views
  • 3 replies
  • 1 kudos

Resolved! Connect from Databricks to an HBase HDInsight cluster.

Hi, I have a Databricks installation in Azure. I want to run a job that connects to HBase in a separate HDInsight cluster. What I tried: created a peering between the HBase cluster and Databricks vNets. I can ping the IPs of the HBase ZooKeeper nodes, but I cannot acce...

Latest Reply
User16764241763
Databricks Employee
  • 1 kudos

Vadim, Thank you for the response. Appreciate it.

2 More Replies
lizou
by Contributor III
  • 2346 Views
  • 2 replies
  • 2 kudos

Merge into and data loss

I have a Delta table with 20M rows. The table is updated dozens of times per day using MERGE INTO, and the merge worked fine for a year. But recently I began to notice that some data is deleted by the merge without any delete being specified. Mer...

Latest Reply
lizou
Contributor III
  • 2 kudos

I can't reproduce the issue anymore. For now, I am going to limit the number of MERGE INTO commands, as the intermediate data transformations do not need versioning history. I am going to try to use combined views for each step and do a one-time merge i...
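
For illustration, a hedged sketch of the consolidation described: stage the intermediate transformations as views and run a single MERGE at the end (the table and view names are made up for the example):

```
# One merge from a combined view instead of many intermediate merges.
spark.sql("""
  MERGE INTO target AS t
  USING combined_updates_view AS s
  ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *
""")
```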

1 More Reply
shan_chandra
by Databricks Employee
  • 6891 Views
  • 1 reply
  • 1 kudos

Resolved! Insert query fails with error "The query is not executed because it tries to launch ***** tasks in a single stage, while maximum allowed tasks one query can launch is 100000"

Py4JJavaError: An error occurred while calling o236.sql. : org.apache.spark.SparkException: Job aborted. at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:201) at org.apache.spark.sql.execution.datasources.I...

Latest Reply
shan_chandra
Databricks Employee
  • 1 kudos

Could you please increase the below config (at the cluster level) to a higher value, or set it to zero: spark.databricks.queryWatchdog.maxQueryTasks 0. Setting this Spark config alleviates the issue.
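
Concretely, that could look like the following at the notebook level (a cluster-level Spark config achieves the same; the config name is taken from the reply above):

```
# Raise the Query Watchdog task ceiling, or set 0 to disable the limit.
spark.conf.set("spark.databricks.queryWatchdog.maxQueryTasks", 0)
```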

PradeepRavi
by New Contributor III
  • 44086 Views
  • 6 replies
  • 10 kudos

How do I prevent _success and _committed files in my write output?

Is there a way to prevent the _success and _committed files in my output? It's a tedious task to navigate to all the partitions and delete the files. Note: the final output is stored in Azure ADLS.

Latest Reply
shan_chandra
Databricks Employee
  • 10 kudos

Please find the below steps to remove the _SUCCESS, _committed, and _started files: set spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false") to remove the success file, and run the VACUUM command multiple times until the _committed and _started files...
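
A sketch of those steps, assuming a Databricks notebook (the config is Databricks-specific and the VACUUM target path is illustrative):

```
# Stop new _SUCCESS files from being written.
spark.conf.set("spark.databricks.io.directoryCommit.createSuccessFile", "false")

# Clean up existing _committed/_started files; per the reply above, VACUUM
# may need to run several times before they are all removed.
spark.sql("VACUUM '/mnt/adls/output/path'")
```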

5 More Replies
auser85
by New Contributor III
  • 4128 Views
  • 3 replies
  • 1 kudos

dbutils.notebook.run() fails with job aborted but running the notebook individually works

I have a notebook that runs many notebooks in order, along the lines of:
```
%python
notebook_list = ['Notebook1', 'Notebook2']

for notebook in notebook_list:
    print(f"Now on Notebook: {notebook}")
    try:
        dbutils.notebook.run(f'{notebook}', 3600)
    e...
```

Latest Reply
auser85
New Contributor III
  • 1 kudos

I found the problem. Even if a notebook creates and specifies a widget fully, the notebook run process, e.g. dbutils.notebook.run('notebook'), will not know how to use it. If I replace my widget with a non-widget-provided value, the process works fine...
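
For context, the usual pattern is to pass the value explicitly through dbutils.notebook.run's arguments map and read it in the child notebook via dbutils.widgets.get; a hedged sketch, where the argument name 'run_date' is made up for illustration:

```
# Parent notebook: pass the value as an argument instead of relying on the
# child's widget default (argument name 'run_date' is illustrative).
dbutils.notebook.run('Notebook1', 3600, {'run_date': '2024-01-01'})

# Child notebook (Notebook1) would read it back with:
# run_date = dbutils.widgets.get('run_date')
```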

2 More Replies