Data Engineering

Forum Posts

Matt_Johnston
by New Contributor III
  • 2420 Views
  • 4 replies
  • 4 kudos

Resolved! Disk Type in Azure Databricks

Hi there, how are the disk tiers determined in Azure Databricks? We are currently using a pool of Standard DS3 v2 virtual machines, all with Premium SSD disks. Is there a way to change the tier of the disks? Thanks

Latest Reply
Atanu
Esteemed Contributor
  • 4 kudos

I think we do not have an option to change the disk type at the moment, but I would suggest raising a feature request through Azure support if you are an Azure Databricks user. On AWS you can do the same from https://docs.databricks.com/res...

Shridhar
by New Contributor
  • 12140 Views
  • 2 replies
  • 2 kudos

Resolved! Load multiple csv files into a dataframe in order

I can load multiple csv files by doing something like:

paths = ["file_1", "file_2", "file_3"]
df = (sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .load(paths))

But this doesn't seem to preserve the...

Latest Reply
Jaswanth_Saniko
New Contributor III
  • 2 kudos

val diamonds = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/FileStore/tables/11.csv", "/FileStore/tables/12.csv", "/FileStore/tables/13.csv")

display(diamonds)

This is working for me @Shridhar
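If the original file order matters, a minimal PySpark sketch (assuming a notebook where spark is already defined; the file names are the question's own placeholders) is to read each file separately, tag its rows with the file's position, then union and sort:

from functools import reduce
from pyspark.sql import functions as F

paths = ["file_1", "file_2", "file_3"]
# Tag each file's rows with the file's position in the list
dfs = [
    spark.read.option("header", "true").csv(p).withColumn("file_order", F.lit(i))
    for i, p in enumerate(paths)
]
# Union everything and sort by the tag to recover the original file order
df = reduce(lambda a, b: a.unionByName(b), dfs).orderBy("file_order")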

Reza
by New Contributor III
  • 2151 Views
  • 2 replies
  • 0 kudos

Resolved! Can we order the widgets?

I have two text widgets (dbutils.widgets.text). One is called "start_date" and the other is "end_date". When I create them, they are shown in alphabetical order (end_date, start_date). Is there any way that we can set the order when we create the...

Latest Reply
Atanu
Esteemed Contributor
  • 0 kudos

I think all the available options are listed at https://docs.databricks.com/notebooks/widgets.html, @Reza Rajabi, but we can cross-check.
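Since widgets always render alphabetically by name (as the question notes), a common workaround (a sketch, not an official ordering API) is to prefix the widget names with numbers and keep a friendly display name as the label argument:

# Numeric prefixes force the alphabetical sort into the order you want;
# the third argument is the label shown in the UI
dbutils.widgets.text("1_start_date", "", "start date")
dbutils.widgets.text("2_end_date", "", "end date")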

timothy_uk
by New Contributor III
  • 1409 Views
  • 4 replies
  • 0 kudos

Resolved! Zombie .NET Spark Databricks Job (CoarseGrainedExecutorBackend)

Hi all,
Environment:
  • Nodes: Standard_E8s_v3
  • Databricks Runtime: 9.0
  • .NET for Apache Spark 2.0.0
I'm invoking spark-submit to run a .NET Spark job hosted in Azure Databricks. The job is written in C#.NET with its only transformation and action, reading a C...

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

Hi @Timothy Lin, I recommend not using spark.stop() or System.exit(0) in your code: they explicitly stop the Spark context, but the graceful shutdown and handshake with Databricks' job service does not happen.

Braxx
by Contributor II
  • 3022 Views
  • 4 replies
  • 3 kudos

Resolved! spark.read excel with formula

For some reason Spark is not reading the data correctly from an xlsx file in the column with a formula. I am reading it from blob storage. Consider this simple data set: the column "color" has formulas for all the cells, like =VLOOKUP(A4,C3:D5,2,0). In case...

Latest Reply
-werners-
Esteemed Contributor III
  • 3 kudos

The formula itself is probably what is actually stored in the Excel file; Excel translates this to NA. I only know of setErrorCellsToFallbackValues, but I doubt it is applicable in your case here. You could use a matching function (e.g. a regexp) to d...
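For reference, a minimal sketch of the option mentioned above, assuming the com.crealytics:spark-excel library is attached to the cluster (the file path is a placeholder):

df = (spark.read
    .format("com.crealytics.spark.excel")
    .option("header", "true")
    .option("inferSchema", "true")
    # Replace error cells (e.g. #N/A from a failed VLOOKUP) with fallback values
    .option("setErrorCellsToFallbackValues", "true")
    .load("/mnt/blob/simple_dataset.xlsx"))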

chandan_a_v
by Valued Contributor
  • 1801 Views
  • 8 replies
  • 4 kudos

Resolved! Spark Error : RScript (1243) terminated unexpectedly: Cannot call r___RBuffer__initialize().

grid_slice %>%
  sdf_copy_to(
    sc = sc,
    name = "grid_slice",
    overwrite = TRUE
  ) %>%
  sdf_repartition(
    partitions = min(n_executors * 3, NROW(grid_slice)),
    partition_by = "variable"
  ) %>%
  spark_apply(
    f = slice_data_wrapper,
    columns = c(
      variable...

Latest Reply
chandan_a_v
Valued Contributor
  • 4 kudos

Hi @Kaniz Fatma, did you find any solution? Please let us know.

RiyazAli
by Contributor III
  • 2597 Views
  • 4 replies
  • 3 kudos

Resolved! Where does the files downloaded from wget get stored in Databricks?

Hey team! All I'm trying to do is download a csv file stored on S3 and read it using Spark. Here's what I mean:

!wget https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2020-01.csv

If I download this "yellow_tripdata_2020-01.csv", where exactly it wo...

Latest Reply
RiyazAli
Contributor III
  • 3 kudos

Hi @Kaniz Fatma, thanks for the reminder. Hey @Hubert Dudek, thank you very much for your prompt response. Initially, I was using urllib3 to 'GET' the data residing at the URL, so I wanted an alternative for the same. Unfortunately, the requests libr...
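For anyone else wondering: a shell wget in a notebook writes to the driver's local filesystem (typically /databricks/driver), not to DBFS. A minimal Python sketch of fetching the file and copying it somewhere Spark can read it (the dbfs:/tmp target path is just an example):

import urllib.request

# Download to the driver's local disk
urllib.request.urlretrieve(
    "https://s3.amazonaws.com/nyc-tlc/trip+data/yellow_tripdata_2020-01.csv",
    "/tmp/yellow_tripdata_2020-01.csv",
)

# Copy from the driver (file:/) into DBFS so every cluster node can read it
dbutils.fs.cp("file:/tmp/yellow_tripdata_2020-01.csv", "dbfs:/tmp/yellow_tripdata_2020-01.csv")

df = spark.read.option("header", "true").csv("dbfs:/tmp/yellow_tripdata_2020-01.csv")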

TheDataDexter
by New Contributor III
  • 2020 Views
  • 3 replies
  • 3 kudos

Resolved! Single-Node cluster works but Multi-Node clusters do not read data.

I am currently working with a VNet-injected Databricks workspace. At the moment I have mounted an ADLS Gen2 resource on the Databricks cluster. When running notebooks on a single node that read, transform, and write data, we do not encounter any probl...

Latest Reply
TheDataDexter
New Contributor III
  • 3 kudos

@Werner Stinckens, thank you for your reply. I will take a look at the network configurations today.

GlenLewis
by New Contributor III
  • 2508 Views
  • 3 replies
  • 0 kudos

Resolved! Markdown and table of contents are no longer working in Notebooks

Around 2 days ago, Markdown in our notebooks stopped working (the %md tag isn't visible, but the headings appear as #Heading1). In addition, there are no longer any tables of contents on any of my workbooks. Trying a different instance in Microsoft Az...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

@Glen Lewis - Thank you for coming to the community with this. Would you be happy to mark your answer as best so other members can find the solution more readily?

saltuk
by Contributor
  • 830 Views
  • 0 replies
  • 0 kudos

Using Parquet, passing a partition on INSERT OVERWRITE: the PARTITION clause contains an expression and it gives an error.

I am new to Spark SQL; we are migrating our Cloudera workloads to Databricks. A lot of the SQL is done, only a few pieces are ongoing. We are having some trouble passing an argument and using it in an expression in the PARTITION section. LOGDATE is an argu...
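No reply was recorded, but for reference: Spark SQL's static PARTITION clause accepts only literal values, not expressions. The usual workaround is to compute the value first and substitute it as a literal. A minimal PySpark sketch (logs and staging_logs are hypothetical table names standing in for the real ones):

# Compute the partition value up front; an expression cannot live inside PARTITION (...)
logdate = spark.sql("SELECT date_format(date_sub(current_date(), 1), 'yyyyMMdd')").first()[0]

spark.sql(f"""
    INSERT OVERWRITE TABLE logs PARTITION (LOGDATE = '{logdate}')
    SELECT col1, col2 FROM staging_logs
""")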

Oricus_semicon
by New Contributor
  • 221 Views
  • 0 replies
  • 0 kudos

oricus-semicon.com

Oricus Semicon Solutions is an innovative Semiconductor Tools manufacturing company who, with almost 100 years of collective expertise, craft high tech bespoke tooling solutions for the global Semiconductor Assembly and Test industry.https://oricus-s...

SankaraiahNaray
by New Contributor II
  • 19627 Views
  • 10 replies
  • 6 kudos

Resolved! Not able to read text file from local file path - Spark CSV reader

We are using the Spark CSV reader to read a csv file into a DataFrame, and we are running the job with yarn-client; it works fine in local mode. We are submitting the Spark job from an edge node. But when we place the file in a local file path instead...

Latest Reply
Kaniz
Community Manager
  • 6 kudos

Hi @Sankaraiah Narayanasamy, this seems to be a bug in the spark-shell command when reading a local file, but there is a workaround: when running the spark-submit command, just specify --conf "spark.authenticate=false". See SPARK-23476 for reference.
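More generally, when a job runs on a cluster (yarn-client included), a local path must carry the file:// scheme and the file must exist on every node that reads it; otherwise Spark resolves the path against the default filesystem (HDFS). A minimal PySpark sketch with a placeholder path:

# The file must be present at this path on the driver and on every worker node
df = (spark.read
    .option("header", "true")
    .csv("file:///home/user/data/input.csv"))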

chaitanya
by New Contributor II
  • 2247 Views
  • 3 replies
  • 4 kudos

Resolved! While loading Data from blob to delta lake facing below issue

I'm calling a stored proc, storing the result in a pandas dataframe, and then creating a list. While creating the list I get the error below: Databricks execution failed with error state Terminated. For more details please check the run page url: path An error occurred w...

Latest Reply
shan_chandra
Honored Contributor III
  • 4 kudos

@chaitanya, could you please try disabling Arrow optimization and see if this resolves the issue?

spark.sql.execution.arrow.enabled false
spark.sql.execution.arrow.pyspark.enabled false
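Those settings can go in the cluster's Spark config, or be flipped at runtime in a notebook; a quick sketch (the first key is the older spelling, kept for compatibility):

spark.conf.set("spark.sql.execution.arrow.enabled", "false")
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "false")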

sanjoydas6
by New Contributor III
  • 3968 Views
  • 15 replies
  • 3 kudos

Resolved! Problem faced while trying to Reset my Community Edition Password

I have forgotten my Databricks Community Edition password and am trying to reset it using the Forgot Password link. It says that an email will be sent with a link to reset the password, but the email never arrives. However, Databricks mail...

Latest Reply
Kaniz
Community Manager
  • 3 kudos

Hi @Sanjoy Das, we could not find an account associated with your email. Did you provide the correct email, or did you delete your account? Could you please create a CE account, or send over the correct email address with which you used to browse the C...

maranBH
by New Contributor III
  • 1070 Views
  • 4 replies
  • 1 kudos

Resolved! Trained model artifact, CI/CD and Databricks without MLFlow.

Hi all, we are constructing our CI/CD pipelines with the Repos feature, following this guide:
https://databricks.com/blog/2021/09/20/part-1-implementing-ci-cd-on-databricks-using-databricks-notebooks-and-azure-devops.html
I'm trying to implement my pipes...

Latest Reply
sean_owen
Honored Contributor II
  • 1 kudos

So you are managing your models with MLflow and want to include them in a git repository? You can do that in a CI/CD process; it would run the mlflow CLI to copy the model you want (e.g. models:/my_model/production) to a git checkout and then commit i...
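A minimal sketch of that download step in Python, assuming a recent MLflow version and a registered model named my_model (both placeholders); the CI job would then git-add and commit the downloaded files:

import mlflow

# Pull the production-stage model's files into the working tree
local_path = mlflow.artifacts.download_artifacts(
    artifact_uri="models:/my_model/production",
    dst_path="model_artifacts",
)
print(f"Model files at {local_path}; commit them from the CI job.")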
