Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Matt_Johnston
by New Contributor III
  • 8658 Views
  • 4 replies
  • 4 kudos

Resolved! Disk Type in Azure Databricks

Hi there, how are the disk tiers determined in Azure Databricks? We are currently using a pool of Standard DS3 v2 virtual machines, all with Premium SSD disks. Is there a way to change the tier of the disks? Thanks

Latest Reply
Atanu
Databricks Employee
  • 4 kudos

I don't think there is an option to change the disk type at the moment, but I would suggest raising a feature request through Azure support if you are an Azure Databricks user. On AWS you can do the same from https://docs.databricks.com/res...

3 More Replies
Shridhar
by New Contributor
  • 18531 Views
  • 2 replies
  • 2 kudos

Resolved! Load multiple csv files into a dataframe in order

I can load multiple csv files by doing something like:
paths = ["file_1", "file_2", "file_3"]
df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .load(paths)
But this doesn't seem to preserve the...

Latest Reply
Jaswanth_Saniko
New Contributor III
  • 2 kudos

val diamonds = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/FileStore/tables/11.csv", "/FileStore/tables/12.csv", "/FileStore/tables/13.csv")

display(diamonds)

This is working for me, @Shridhar

1 More Replies
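On the ordering question in this thread: Spark does not promise to keep rows in the order the file paths are listed, so a common safeguard (a sketch, not something from the thread above) is to sort the paths explicitly before passing them to .load(...), using a natural sort so that file_10 lands after file_2 rather than after file_1:

```python
import re

def natural_sort(paths):
    """Sort path strings so numeric chunks compare as integers
    (file_2 before file_10), unlike plain lexicographic sorting."""
    def key(p):
        # Split into alternating text/number tokens; numbers become ints.
        return [int(tok) if tok.isdigit() else tok
                for tok in re.split(r"(\d+)", p)]
    return sorted(paths, key=key)

paths = natural_sort(["file_10", "file_2", "file_1"])
print(paths)  # ['file_1', 'file_2', 'file_10']
```

You would then pass the sorted list to spark.read...load(...). If row order still matters downstream, a more robust option is to add a column via input_file_name() and sort on it, since partition order is not guaranteed.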
Itachi_Naruto
by New Contributor II
  • 10624 Views
  • 3 replies
  • 0 kudos

hdbscan package error

I try to import **hdbscan** but it throws the following error: /databricks/python_shell/dbruntime/PythonPackageImportsInstrumentation/__init__.py in import_patch(name, globals, locals, fromlist, level) 156 # Import the desired module. ...

Latest Reply
Atanu
Databricks Employee
  • 0 kudos

Does this help, @Rajamannar Aanjaram?

2 More Replies
Reza
by New Contributor III
  • 3728 Views
  • 1 replies
  • 0 kudos

Resolved! Can we order the widgets?

I have two text widgets (dbutils.widgets.text). One is called "start_date" and the other "end_date". When I create them, they are shown in alphabetical order (end_date, start_date). Is there any way to set the order when we create the...

Latest Reply
Atanu
Databricks Employee
  • 0 kudos

I think all available options are listed at https://docs.databricks.com/notebooks/widgets.html, @Reza Rajabi, but we can cross-check.

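Since widgets render in alphabetical order, one common workaround (the helper below is a hypothetical sketch, not a dbutils API) is to prefix each widget name with a zero-padded index so the alphabetical order matches the intended one:

```python
def ordered_widget_names(names):
    """Prefix widget names with a zero-padded index so that alphabetical
    rendering matches the given order (01_..., 02_..., ...)."""
    return [f"{i:02d}_{name}" for i, name in enumerate(names, start=1)]

names = ordered_widget_names(["start_date", "end_date"])
print(names)  # ['01_start_date', '02_end_date']
```

In a notebook you would then create the widgets with these names, e.g. dbutils.widgets.text("01_start_date", ""), accepting the numeric prefix in the label.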
timothy_uk
by New Contributor III
  • 3895 Views
  • 3 replies
  • 0 kudos

Zombie .NET Spark Databricks Job (CoarseGrainedExecutorBackend)

Hi all,
Environment:
Nodes: Standard_E8s_v3
Databricks Runtime: 9.0
.NET for Apache Spark 2.0.0
I'm invoking spark-submit to run a .NET Spark job hosted in Azure Databricks. The job is written in C#/.NET, with its only transformation and action reading a C...

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Hi @Timothy Lin, I would recommend not using spark.stop() or System.exit(0) in your code: they explicitly stop the Spark context, so the graceful shutdown and handshake with Databricks' job service does not happen.

2 More Replies
anthony_cros
by New Contributor
  • 4941 Views
  • 1 replies
  • 0 kudos

How to publish a notebook in order to share its URL, as a Premium Plan user?

Hi, I'm a Premium Plan user and am trying to share a notebook via URL. The link at https://docs.databricks.com/notebooks/notebooks-manage.html#publish-a-notebook states: "If you're using Community Edition, you can publish a notebook so that you can sha...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hello @Anthony Cros​ - My name is Piper, and I'm a moderator for Databricks. Welcome and thank you for your question. We will give the members some time to answer your question. If needed, we will circle back around later.

Braxx
by Contributor II
  • 12416 Views
  • 3 replies
  • 3 kudos

Resolved! spark.read excel with formula

For some reason Spark is not reading the data correctly from an xlsx file in the column with a formula. I am reading it from blob storage. Consider this simple data set: the column "color" has formulas in all the cells, like =VLOOKUP(A4,C3:D5,2,0). In case...

Latest Reply
-werners-
Esteemed Contributor III
  • 3 kudos

The formula itself is probably what is actually stored in the Excel file; Excel translates this to NA. I only know of setErrorCellsToFallbackValues, but I doubt it is applicable in your case here. You could use a matching function (a regexp, for example) to d...

2 More Replies
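The matching-function idea from the reply above can be sketched in plain Python (the error list and helper name are illustrative, not from the thread); in a notebook the same check could be wrapped in a UDF before further processing:

```python
import re

# Treat Excel error strings (e.g. "#N/A", "#REF!") and leaked raw
# formulas (strings starting with "=") as missing values.
ERROR_PATTERN = re.compile(r"^(#N/A|#REF!|#VALUE!|#DIV/0!)$|^=")

def clean_cell(value):
    """Return None for Excel error strings or formula text, else the value."""
    if isinstance(value, str) and ERROR_PATTERN.search(value):
        return None
    return value

print(clean_cell("#N/A"))                    # None
print(clean_cell("=VLOOKUP(A4,C3:D5,2,0)"))  # None
print(clean_cell("red"))                     # red
```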
chandan_a_v
by Valued Contributor
  • 5166 Views
  • 4 replies
  • 3 kudos

Resolved! Spark Error : RScript (1243) terminated unexpectedly: Cannot call r___RBuffer__initialize().

grid_slice %>%
  sdf_copy_to(
    sc = sc,
    name = "grid_slice",
    overwrite = TRUE
  ) %>%
  sdf_repartition(
    partitions = min(n_executors * 3, NROW(grid_slice)),
    partition_by = "variable"
  ) %>%
  spark_apply(
    f = slice_data_wrapper,
    columns = c(
      variable...

Latest Reply
chandan_a_v
Valued Contributor
  • 3 kudos

Hi @Kaniz Fatma, did you find any solution? Please let us know.

3 More Replies
RiyazAliM
by Honored Contributor
  • 6671 Views
  • 2 replies
  • 3 kudos

Resolved! Where does the files downloaded from wget get stored in Databricks?

Hey team! All I'm trying to do is download a csv file stored on S3 and read it using Spark. Here's what I mean: !wget https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2020-01.csv If I download this "yellow_tripdata_2020-01.csv", where exactly wo...

Latest Reply
RiyazAliM
Honored Contributor
  • 3 kudos

Hi @Kaniz Fatma, thanks for the reminder. Hey @Hubert Dudek, thank you very much for your prompt response. Initially, I was using urllib3 to 'GET' the data residing at the URL, so I wanted an alternative. Unfortunately, the requests libr...

1 More Replies
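For context on the question above: files fetched with !wget land on the driver's local filesystem (typically the shell's working directory), not in DBFS, so Spark needs a file: URI, or the file copied into DBFS, to read them. A small sketch, with an assumed /tmp location:

```python
import os

def spark_path_for_local_file(filename, directory="/tmp"):
    """Build the 'file:' URI Spark needs to read a driver-local file.
    The directory here is an assumption; use wherever wget saved the file."""
    return "file:" + os.path.join(directory, filename)

path = spark_path_for_local_file("yellow_tripdata_2020-01.csv")
print(path)  # file:/tmp/yellow_tripdata_2020-01.csv
```

From there, something like spark.read.csv(path, header=True) should work, or the file can be copied into DBFS first (e.g. with dbutils.fs.cp).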
GlenLewis
by New Contributor III
  • 6118 Views
  • 3 replies
  • 0 kudos

Resolved! Markup and table of contents is no longer working on Notebooks

Around 2 days ago, Markdown in our notebooks stopped working (the %md tag isn't visible, but the headings appear as #Heading1). In addition, there is no longer a table of contents on any of my workbooks. Trying a different instance in Microsoft Az...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Glen Lewis​ - Thank you for coming to the community with this. Would you be happy to mark your answer as best so other members can find the solution more readily?

2 More Replies
Braxx
by Contributor II
  • 12764 Views
  • 4 replies
  • 2 kudos

Resolved! issue with group by

I am trying to group a data frame by "PRODUCT" and "MARKET" and aggregate the remaining columns specified in col_list. There are many more columns in the list, but for simplification let's take the example below. Unfortunately I am getting the error: "TypeError:...

Latest Reply
Pholo
Contributor
  • 2 kudos

Hi @Shivers Robert, try to use something like that:
import pyspark.sql.functions as F

def year_sum(year, column_year, column_sum):
  return F.when(
    F.col(column_year) == year, F.col(column_sum)
  ).otherwise(F.lit(None))

display(df.select(*[F....

3 More Replies
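The F.when(...).otherwise(F.lit(None)) expression in the reply effectively makes non-matching rows contribute nothing to the sum. The same logic in plain Python (the row and column names below are made up for illustration):

```python
def year_sum(rows, year, column_year, column_sum):
    """Sum column_sum over rows whose column_year equals year; rows that
    don't match contribute nothing, mirroring when/otherwise(None)."""
    return sum(row[column_sum] for row in rows if row[column_year] == year)

rows = [
    {"PRODUCT": "A", "YEAR": 2020, "SALES": 10},
    {"PRODUCT": "A", "YEAR": 2021, "SALES": 7},
    {"PRODUCT": "B", "YEAR": 2020, "SALES": 5},
]
print(year_sum(rows, 2020, "YEAR", "SALES"))  # 15
```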
sonali1996
by New Contributor
  • 3498 Views
  • 0 replies
  • 0 kudos

Multithreading in SCALA DATABRICKS

Hi team, I was trying to call/run multiple notebooks from one notebook concurrently, but the called notebooks are executed one by one, whereas I need them all to run concurrently. I have also tried using threading in Scala Databri...

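The question above is about Scala, where scala.concurrent.Future is the usual tool, but the general pattern (submit each notebook run to a pool and collect the results) looks the same in Python. A sketch with a stand-in for dbutils.notebook.run, which is what you would actually call on Databricks; the notebook paths are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_notebook(path):
    """Stand-in for dbutils.notebook.run(path, timeout, args) on Databricks.
    Here it just returns a marker so the pattern can be demonstrated."""
    return f"finished {path}"

notebooks = ["./notebook_a", "./notebook_b", "./notebook_c"]
# One worker per notebook so all runs are in flight at the same time.
with ThreadPoolExecutor(max_workers=len(notebooks)) as pool:
    results = list(pool.map(run_notebook, notebooks))
print(results)
```

pool.map preserves the input order in the results, even though the runs overlap in time.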
saltuk
by Contributor
  • 2790 Views
  • 0 replies
  • 0 kudos

Using Parquet, passing a partition on INSERT OVERWRITE. The partition clause includes an equation and it gives an error.

I am new to Spark SQL; we are migrating our Cloudera workloads to Databricks. A lot of the SQL is done, only a few pieces are ongoing. We are having some trouble passing an argument and using it in an equation in the PARTITION section. LOGDATE is an argu...

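One way to pass an argument like LOGDATE into the PARTITION clause is to build the SQL string on the driver and submit it with spark.sql(...). A sketch with hypothetical table and column names (and note that interpolating values into SQL like this is only safe for trusted inputs):

```python
def insert_overwrite_sql(table, partition_col, partition_val, select_sql):
    """Build an INSERT OVERWRITE statement with a literal partition value.
    Table and column names here are illustrative."""
    return (
        f"INSERT OVERWRITE TABLE {table} "
        f"PARTITION ({partition_col} = '{partition_val}') "
        f"{select_sql}"
    )

sql = insert_overwrite_sql("logs", "LOGDATE", "2021-11-01",
                           "SELECT * FROM staging_logs")
print(sql)
```

On Databricks you would then run spark.sql(sql); dynamic partition overwrite (leaving the value out of the PARTITION clause) is another option when the partition value comes from the data itself.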
Oricus_semicon
by New Contributor
  • 875 Views
  • 0 replies
  • 0 kudos

oricus-semicon.com

Oricus Semicon Solutions is an innovative Semiconductor Tools manufacturing company who, with almost 100 years of collective expertise, craft high tech bespoke tooling solutions for the global Semiconductor Assembly and Test industry.https://oricus-s...
