Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

Jasam
by New Contributor
  • 8497 Views
  • 3 replies
  • 0 kudos

How to infer csv schema with all columns as string by default using spark-csv?

I am using the spark-csv utility, but I need all columns to be inferred as string by default when it infers the schema. Thanks in advance.

Latest Reply
jhoop2002
New Contributor II
  • 0 kudos

@peyman what if I don't want to manually specify the schema? For example, I have a vendor that can't build a valid .csv file. I just need to import it somewhere so I can explore the data and find the errors. Just like the original author's question?...

2 More Replies
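
For reference: when schema inference is turned off, Spark's CSV reader types every column as string by default, which is effectively what the question asks for. A minimal PySpark sketch (the file path is hypothetical):

    df = (spark.read
          .option("header", "true")
          .option("inferSchema", "false")   # the default; every column is read as string
          .csv("/mnt/data/input.csv"))
    df.printSchema()                        # all fields show up as string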
NEERAJRATHORE19
by New Contributor
  • 9475 Views
  • 3 replies
  • 1 kudos

org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree: Exchange SinglePartition : Error

I am creating a dataframe using SQL in which all the underlying tables are actually temp views based on dataframes. I am getting the below error every time. Can anyone help me understand the issue here? Thanks in advance. An error occurred while calling o183...

Latest Reply
htinhk
New Contributor II
  • 1 kudos

I also encountered the same problem... It's weird that I can do the query but not the count.

2 More Replies
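
No definitive fix appears in this thread. One workaround sometimes suggested for exchange-related failures that only surface on actions such as count() is to persist the dataframe before acting on it; a hedged sketch, with a hypothetical view name and query:

    df = spark.sql("SELECT year, COUNT(*) AS n FROM sales_view GROUP BY year")
    df.persist()        # materialize the plan once before count()
    print(df.count())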
XinhHuynh
by New Contributor
  • 8547 Views
  • 3 replies
  • 0 kudos

How do you add user comments to a notebook?

This is shown in a recent blog post (Figure 5): https://databricks.com/blog/2015/06/04/simplify-machine-learning-on-spark-with-databricks.html

Latest Reply
Munna123
New Contributor II
  • 0 kudos

Using the mouse and touchpad is very annoying, which is why Microsoft introduced Windows shortcut keys. These shortcut keys are used to avoid using the mouse and touchpad.

2 More Replies
MatthewHo
by New Contributor
  • 6537 Views
  • 4 replies
  • 0 kudos

"Importing" functions from other notebooks

For the sake of organization, I would like to define a few functions in notebook A, and have notebook B have access to those functions in notebook A. Having everything in one notebook makes it look very cluttered. Is this possible?

Latest Reply
simone01
New Contributor II
  • 0 kudos

Risk Management Assignment Help: https://managementassignmentshelp.com/risk-management-assignment-help.php Material Science assignment help: https://myassignmentmart.com/assignment/material-science-assignment-help.html ...

3 More Replies
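
For the question itself, the usual Databricks mechanism is the %run magic command: it executes another notebook and makes its definitions available in the current one, and it must be the only command in its cell. A minimal sketch (the notebook path is hypothetical):

    %run ./notebookA

After this, functions defined in notebook A can be called from notebook B as if they were defined locally.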
RaymondXie
by New Contributor
  • 6007 Views
  • 1 reply
  • 0 kudos

How to union multiple dataframes in PySpark within a Databricks notebook

I have 4 DFs: Avg_OpenBy_Year, AvgHighBy_Year, AvgLowBy_Year and AvgClose_By_Year, all of which have a common column 'Year'. I want to join them together to get a final df like: `Year, Open, High, Low, Close`. At the moment I have to use the ugly...

Latest Reply
thiago_matos
New Contributor II
  • 0 kudos

Import reduce function in this way: from functools import reduce

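
Building on the reply above, a sketch of combining the four dataframes on their common 'Year' column with functools.reduce (dataframe names are taken from the question):

    from functools import reduce

    dfs = [Avg_OpenBy_Year, AvgHighBy_Year, AvgLowBy_Year, AvgClose_By_Year]
    final_df = reduce(lambda left, right: left.join(right, on="Year"), dfs)
    # final_df has columns: Year, Open, High, Low, Close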
McKayHarris
by New Contributor II
  • 17595 Views
  • 17 replies
  • 3 kudos

ExecutorLostFailure: Remote RPC Client Disassociated

This is an expensive and long-running job that gets about halfway done before failing. The stack trace is included below, but here is the salient part: Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 4881 in stage...

Latest Reply
RodrigoDe_Freit
New Contributor II
  • 3 kudos

According to https://docs.databricks.com/jobs.html#jar-job-tips: "Job output, such as log output emitted to stdout, is subject to a 20MB size limit. If the total output has a larger size, the run will be canceled and marked as failed." That was my prob...

16 More Replies
dtr
by New Contributor
  • 5266 Views
  • 1 reply
  • 0 kudos

PicklingError: Could not serialize object: Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers.

I am trying to write a function in Azure Databricks and would like to use spark.sql inside the function, but it looks like I cannot use it on worker nodes.

def SEL_ID(value, index):
    # some processing on value here
    ans = spark.sql("SELECT id FRO...

Latest Reply
MartinhoAzevedo
New Contributor II
  • 0 kudos

Hi there. I guess I'm a bit late, but do you remember how, and if, you fixed this issue? I'm getting the same exact problem. @dtr

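
For context, the usual fix for this error is to avoid calling spark.sql (or anything else on the SparkSession or SparkContext) from code that runs on workers, such as a UDF, and to express the lookup as a join instead. A hedged sketch with hypothetical table, dataframe, and column names:

    # Build the lookup as a dataframe on the driver; the join runs as one plan
    lookup = spark.sql("SELECT id, value FROM lookup_table")
    result = main_df.join(lookup, on="value", how="left")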
HarisKhan
by New Contributor
  • 8953 Views
  • 2 replies
  • 0 kudos

Escape Backslash(/) while writing spark dataframe into csv

I am using Spark version 2.4.0. I know that backslash is the default escape character in Spark, but I am still facing the issue below. I am reading a csv file into a Spark dataframe (using PySpark) and writing the dataframe back into csv. I have so...

Latest Reply
sean_owen
Honored Contributor II
  • 0 kudos

I'm confused - you say the escape is backslash, but you show forward slashes in your data. Don't you want the escape to be forward slash?

1 More Replies
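
For readers with similar quoting and escaping issues: Spark's CSV writer accepts explicit quote and escape characters. A minimal sketch; the character choices and output path are assumptions, not the poster's exact fix:

    (df.write
       .option("header", "true")
       .option("quote", '"')
       .option("escape", "\\")   # backslash is the default escape; change to suit the data
       .csv("/mnt/data/out"))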
User16826991422
by Contributor
  • 12351 Views
  • 12 replies
  • 0 kudos

Resolved! How do I create a single CSV file from multiple partitions in Databricks / Spark?

Using spark-csv to write data to dbfs, which I plan to move to my laptop via standard S3 copy commands. The default for spark-csv is to write output into partitions. I can force it to a single partition, but would really like to know if there is a ge...

Latest Reply
ChristianHomber
New Contributor II
  • 0 kudos

Without access to bash, it would be highly appreciated if an option within Databricks (e.g. via dbfsutils) existed.

11 More Replies
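
The pattern discussed in this thread is to coalesce to a single partition and then move the lone part file out of Spark's output directory with dbutils. A sketch with hypothetical paths:

    df.coalesce(1).write.option("header", "true").mode("overwrite").csv("/mnt/tmp/out")

    # Spark still writes a directory; copy the single part file to its final name
    part = [f.path for f in dbutils.fs.ls("/mnt/tmp/out") if f.name.startswith("part-")][0]
    dbutils.fs.cp(part, "/mnt/data/final.csv")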
tunguyen90
by New Contributor
  • 7346 Views
  • 3 replies
  • 1 kudos

How to change line separator for csv file exported from dataframe in databricks

Hello, currently I'm facing a problem with the line separator inside csv files, which are exported from a data frame in Azure Databricks (Spark 2.4.3) to Azure Blob storage. All those csv files contain LF as the line separator. I need to have CRLF (\r\n...

Latest Reply
Nikhila
New Contributor II
  • 1 kudos

Hi, have you got the solution for the above problem? Kindly let me know.

2 More Replies
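
No resolution appears in the thread. One workaround for small-to-medium outputs is to collect to pandas on the driver, whose CSV writer supports a custom line terminator (the parameter is line_terminator in older pandas and lineterminator from pandas 1.5 on). A hedged sketch with a hypothetical path:

    # Collects all rows to the driver; only suitable when the data fits in memory
    df.toPandas().to_csv("/dbfs/mnt/out/data_crlf.csv", index=False, line_terminator="\r\n")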
tismith1_558848
by New Contributor
  • 6745 Views
  • 2 replies
  • 0 kudos

Resolved! Change size or aspect ratio of ggplot visualizations

I understand that plots in R notebooks are captured by a png graphics device. Is there a way to set the size or the aspect ratio of the canvas? I understand that I can resize the rendered .png by dragging the handle in the notebook, but that means I...

Latest Reply
kassandra
New Contributor II
  • 0 kudos

Hi @sdaza, the answer above didn't change the size somehow, or perhaps I was putting it in the wrong place? I entered it in a new cell before the %sql cell with the plot chart.

1 More Replies
bhosskie
by New Contributor
  • 9386 Views
  • 9 replies
  • 0 kudos

How to merge two data frames column-wise in Apache Spark

I have the following two data frames, which each have just one column and exactly the same number of rows. How do I merge them so that I get a new data frame which has the two columns and all rows from both data frames? For example, df1: +-----+...

Latest Reply
AmolZinjade
New Contributor II
  • 0 kudos

@bhosskie

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Spark SQL basic example").enableHiveSupport().getOrCreate()
sc = spark.sparkContext

sqlDF1 = spark.sql("select count(*) as Total FROM user_summary")
sqlDF2 = sp...

8 More Replies
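
A common sketch for a column-wise merge of two equally sized single-column frames is to generate a row index on each side and join on it (names here are hypothetical, and this assumes the incoming row order is the one you want):

    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    # Window with no partitioning: fine for modest data, but it funnels
    # everything through a single partition
    w = Window.orderBy(F.monotonically_increasing_id())
    df1_idx = df1.withColumn("_row", F.row_number().over(w))
    df2_idx = df2.withColumn("_row", F.row_number().over(w))
    merged = df1_idx.join(df2_idx, on="_row").drop("_row")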
cfregly
by Contributor
  • 27234 Views
  • 9 replies
  • 0 kudos
Latest Reply
MichaelHuntsber
New Contributor II
  • 0 kudos

I have 8 GB of internal memory, of which only several MB are free, but I also have an additional 8 GB memory card. Anyway, there is not enough space, even though the memory card is completely empty.

8 More Replies
nmud19
by New Contributor II
  • 58965 Views
  • 8 replies
  • 6 kudos

How to delete a folder in Databricks mnt?

I have a folder at location dbfs:/mnt/temp and I need to delete it. I tried using %fs rm mnt/temp and dbutils.fs.rm("mnt/temp"). Could you please help me out with what I am doing wrong?

Latest Reply
amitca71
Contributor II
  • 6 kudos

Use this (the last line belongs inside the for loop, not inside the if):

def delete_mounted_dir(dirname):
    files = dbutils.fs.ls(dirname)
    for f in files:
        if f.isDir():
            delete_mounted_dir(f.path)
        dbutils.fs.rm(f.path, recurse=True)

7 More Replies
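
For the original question specifically, the likely fixes are the missing leading slash and the recurse flag:

    dbutils.fs.rm("/mnt/temp", recurse=True)   # recursive delete of the mounted folder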
vamsivarun007
by New Contributor II
  • 26968 Views
  • 2 replies
  • 2 kudos

Driver is up but is not responsive, likely due to GC.

Hi all, "Driver is up but is not responsive, likely due to GC." This is the message in the cluster event logs. Can anyone help me with this? What does GC mean? Garbage collection? Can we control it externally?

Latest Reply
Carlos_AlbertoG
New Contributor II
  • 2 kudos

spark.catalog.clearCache() solved the problem for me.

1 More Replies