Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

petehart92
by New Contributor II
  • 5836 Views
  • 6 replies
  • 6 kudos

Error While Rendering Visualization -- Map (Markers)

I have a table with latitude and longitude for a few addresses (no more than 10 at the moment), but when I select the appropriate columns in the visualization editor for Map (Markers) I get a message that states "error while rendering visualization"....

Not a lot of detail...
Latest Reply
Gabi_A
New Contributor II
  • 6 kudos

Having the same issue. Every time I update my SQL, all the widgets drop and show the error 'Unable to render visualization'. The only way I have found to fix it is to manually duplicate all my widgets and delete the old ones with errors, which is a pain and ...

5 More Replies
mbejarano89
by New Contributor III
  • 3324 Views
  • 2 replies
  • 0 kudos

Running a K-means (.fit) gives error: "Params must be either a param map or a list/tuple of param maps but got %s." % type(params)

I am running a k-means algorithm. My features are DoubleType and have no nulls, but I get: raise TypeError("Params must be either a param map or a list/tuple of param maps but got %s." % type(params)). Anyone have any idea how to solve this? File /datab...

Latest Reply
mbejarano89
New Contributor III
  • 0 kudos

I found the answer just by trying several things, although I do not understand exactly what the problem was. All I had to do was to cache the input data before fitting the model: assemble = VectorAssembler(inputCols=columns_input, outputCol='features')...

1 More Replies
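A minimal sketch of the workaround described in the reply above, assuming an existing DataFrame `df` with DoubleType feature columns; `columns_input` and `k=3` are placeholders, not values from the original post:

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

# Placeholder feature columns; substitute the DoubleType columns from your own table.
columns_input = ["feature_a", "feature_b"]

assemble = VectorAssembler(inputCols=columns_input, outputCol="features")
assembled = assemble.transform(df).cache()  # caching before .fit() was the reported fix
assembled.count()                           # force materialization of the cache

kmeans = KMeans(k=3, featuresCol="features")  # k=3 is illustrative
model = kmeans.fit(assembled)
```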
Mado
by Valued Contributor II
  • 9689 Views
  • 3 replies
  • 0 kudos

How to update the value of a MAP-type column in a Delta table using a Python dictionary and the SQL UPDATE command?

I have a delta table created by: %sql CREATE TABLE IF NOT EXISTS dev.bronze.test_map ( id INT, table_updates MAP<STRING, TIMESTAMP>, CONSTRAINT test_map_pk PRIMARY KEY(id) ) USING DELTA LOCATION "abfss://bronze@Table Path" With initi...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Mohammad Saber, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking on "Select As Best" if it does. Your feedba...

2 More Replies
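A hedged sketch of one way to do what the question above asks: build the SQL UPDATE from a Python dictionary using Spark SQL's map() function. The dictionary contents and the id = 1 filter are illustrative; the table name follows the post.

```python
# Illustrative dictionary of table names -> last-update timestamps (not from the post).
table_updates = {"orders": "2023-01-01 00:00:00", "customers": "2023-01-02 00:00:00"}

# Spark SQL's map() takes alternating key/value arguments.
pairs = ", ".join(f"'{k}', TIMESTAMP '{v}'" for k, v in table_updates.items())

spark.sql(f"""
    UPDATE dev.bronze.test_map
    SET table_updates = map({pairs})
    WHERE id = 1   -- illustrative key
""")
```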
Chris_Shehu
by Valued Contributor III
  • 1984 Views
  • 2 replies
  • 2 kudos

Map("skipRows", "1") ignored during autoloader process. Something wrong with the format?

I've tried multiple variations of the following code. It seems like the map parameters are being completely ignored. CREATE LIVE TABLE a_raw2 TBLPROPERTIES ("quality" = "bronze") AS SELECT * FROM cloud_files("dbfs:/mnt/c-raw/a/c_medcheck_export*.csv"...

Latest Reply
jose_gonzalez
Databricks Employee
  • 2 kudos

skipRows was added in DBR 11.1 -- what DBR is your DLT pipeline on?

1 More Replies
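For reference, a minimal sketch of the same read expressed as a plain Auto Loader stream in Python rather than DLT SQL; the path and header option are placeholders, and, as the reply notes, skipRows requires DBR 11.1 or later.

```python
# skipRows is a CSV option available from DBR 11.1 onwards.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("skipRows", "1")
      .option("header", "true")     # placeholder; adjust to the file layout
      .load("dbfs:/mnt/c-raw/a/"))  # placeholder path based on the post
```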
Anonymous
by Not applicable
  • 5075 Views
  • 6 replies
  • 5 kudos

COPY INTO command cannot recognise MAP type value from JSON file

I have a delta table in Databricks with a single column of type map<string, string>, and I have a data file in JSON format created by Hive 3 for a table with a column of the same type. And I want to load data from the file into the Databricks table using COPY IN...

Latest Reply
jose_gonzalez
Databricks Employee
  • 5 kudos

Hi Alexey, just a friendly follow-up. Did any of the responses help you to resolve your question? If it did, please mark it as best. Otherwise, please let us know if you still need help.

5 More Replies
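A rough reconstruction of the setup the question above describes, with placeholder table and path names; this is only a sketch of the COPY INTO shape, not a confirmed fix for the MAP parsing issue.

```python
# Placeholder target table with a single MAP<STRING, STRING> column, as in the post.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.map_target (value MAP<STRING, STRING>)
    USING DELTA
""")

# Load the Hive-produced JSON files; the source path is a placeholder.
spark.sql("""
    COPY INTO demo.map_target
    FROM 'abfss://landing@storageaccount.dfs.core.windows.net/hive_export/'
    FILEFORMAT = JSON
""")
```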
wyzer
by Contributor II
  • 2623 Views
  • 2 replies
  • 1 kudos

Resolved! Are we taking advantage of "Map & Reduce"?

Hello, we are new to Databricks and we would like to know if our working method is good. Currently, we are working like this: spark.sql("CREATE TABLE Temp (SELECT avg(***), sum(***) FROM aaa LEFT JOIN bbb WHERE *** >= ***)") With this method, are we us...

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

Spark will handle the map/reduce for you. So as long as you use Spark-provided functions, be it in Scala, Python or SQL (or even R), you will be using distributed processing. You just care about what you want as a result. And afterwards, when you are more...

1 More Replies
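A small sketch illustrating the point in the reply above: the SQL from the question and the equivalent DataFrame API both go through Spark's distributed planner, so either way the work is spread across the cluster. Table and column names here are placeholders.

```python
from pyspark.sql import functions as F

# SQL form, close to the question (columns and filter are placeholders).
sql_result = spark.sql("""
    SELECT avg(a.amount) AS avg_amount, sum(a.amount) AS sum_amount
    FROM aaa a LEFT JOIN bbb b ON a.id = b.id
    WHERE a.amount >= 100
""")

# Equivalent DataFrame API; both produce the same kind of distributed plan.
df_result = (spark.table("aaa")
             .join(spark.table("bbb"), "id", "left")
             .where(F.col("amount") >= 100)
             .agg(F.avg("amount").alias("avg_amount"),
                  F.sum("amount").alias("sum_amount")))

sql_result.explain()  # the plan shows exchanges/aggregations, i.e. distributed execution
```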