Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

ChristianKeller
by New Contributor II
  • 16682 Views
  • 6 replies
  • 0 kudos

Two stage join fails with java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary

Sometimes the error is wrapped in "org.apache.spark.SparkException: Exception thrown in awaitResult:". The error occurs at the step where we extract, for the second time, the rows whose data has been updated. We can count the rows, but we cannot display or w...

Latest Reply
activescott
New Contributor III
  • 0 kudos

Thanks Lleido. I eventually found that I had inadvertently changed the schema of a partitioned DataFrame I had made, narrowing a column's type from a long to an integer. While the cause of the problem is rather obvious in hindsight, it was terribly di...
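For anyone hitting the same PlainLongDictionary error, a minimal Scala sketch of the kind of fix this reply points at (the DataFrame, column, and path names are purely illustrative): cast the column back to its original 64-bit type before writing, so every Parquet file under the table agrees on one schema.

  import org.apache.spark.sql.functions.col
  // Keep the column a long so new files match the files already written for other partitions.
  val fixed = updatedRows.withColumn("id", col("id").cast("long"))
  fixed.write.mode("append").parquet("/mnt/data/events")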

5 More Replies
FrancisLau
by New Contributor
  • 5165 Views
  • 2 replies
  • 0 kudos

Resolved! agg function not working for multiple aggregations

Data has 2 columns: |requestDate|requestDuration| | 2015-06-17| 104| Here is the code: avgSaveTimesByDate = gridSaves.groupBy(gridSaves.requestDate).agg({"requestDuration": "min", "requestDuration": "max","requestDuration": "avg"}) avgSaveTimesBy...

Latest Reply
ReKa
New Contributor III
  • 0 kudos

My guess is that the reason this may not work is that the dictionary input does not have unique keys. With this syntax, column names are keys, and if you have two or more aggregations for the same column, some internal loops may forget the no...
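A minimal Scala sketch of the usual workaround (the DataFrame and input column names are taken from the question, output aliases are illustrative): pass one aggregate expression per result instead of a dictionary, so no keys collide.

  import org.apache.spark.sql.functions.{min, max, avg}
  // Each aggregation gets its own expression and output column, so nothing is overwritten.
  val stats = gridSaves.groupBy("requestDate").agg(
    min("requestDuration").alias("min_duration"),
    max("requestDuration").alias("max_duration"),
    avg("requestDuration").alias("avg_duration"))
  stats.show()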

1 More Replies
Jean-FrancoisRa
by New Contributor
  • 4949 Views
  • 2 replies
  • 0 kudos

Resolved! Select dataframe columns from a sequence of string

Is there a simple way to select columns from a dataframe with a sequence of string? Something like val colNames = Seq("c1", "c2") df.select(colNames)

Latest Reply
vEdwardpc
New Contributor II
  • 0 kudos

Thanks. I needed to modify the final lines. val df_new = df.select(column_names_col:_*) df_new.show() Edward
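For completeness, a self-contained Scala sketch of the accepted approach (df and the column names come from the question):

  import org.apache.spark.sql.functions.col
  val colNames = Seq("c1", "c2")
  // Map each name to a Column, then expand the sequence as varargs.
  val selected = df.select(colNames.map(col): _*)
  selected.show()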

1 More Replies
dheeraj
by New Contributor II
  • 6546 Views
  • 3 replies
  • 0 kudos

How to calculate Percentile of column in a DataFrame in spark?

I am trying to calculate the percentile of a column in a DataFrame. I can't find any percentile_approx function among Spark's aggregation functions. For example, in Hive we have percentile_approx and we can use it in the following way: hiveContext.sql("select per...

Latest Reply
amandaphy
New Contributor II
  • 0 kudos

You can try using df.registerTempTable("tmp_tbl"), then val newDF = sql(/* do something with tmp_tbl */), and continue using newDF.
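Applied to the original percentile question, a minimal Scala sketch of that suggestion (assuming a Hive-enabled SQLContext; the column name duration is hypothetical):

  df.registerTempTable("tmp_tbl")
  // percentile_approx is available through Hive's SQL functions.
  val p95 = sqlContext.sql("select percentile_approx(duration, 0.95) as p95 from tmp_tbl")
  p95.show()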

2 More Replies
cfregly
by Contributor
  • 7059 Views
  • 3 replies
  • 0 kudos
Latest Reply
easimadi
New Contributor II
  • 0 kudos

Hello, please help (not an answer): how do I download a complete CSV result file (>1000) from FileStore onto my laptop? I was trying to follow this instruction set: SQL tutorial (Download All SQL - scala).

2 More Replies
Mallesh
by New Contributor
  • 11958 Views
  • 1 reply
  • 0 kudos

How can I read a parquet file compressed by snappy?

Hi all, I wanted to read a parquet file compressed by snappy into a Spark RDD. The input file name is part-m-00000.snappy.parquet. I have used sqlContext.setConf("spark.sql.parquet.compression.codec.", "snappy") val inputRDD=sqlContext.parqetFile(args(0)) whe...

Latest Reply
raela
Databricks Employee
  • 0 kudos

Have you tried sqlContext.read.parquet("/filePath/") ?
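For reference, a minimal Scala sketch (the mount path is hypothetical): the snappy codec is recorded in the Parquet file footer, so nothing needs to be configured for reading.

  // The reader detects the snappy compression automatically from the file metadata.
  val df = sqlContext.read.parquet("/mnt/input/part-m-00000.snappy.parquet")
  df.take(5).foreach(println)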

longcao
by New Contributor III
  • 19058 Views
  • 5 replies
  • 0 kudos

Resolved! Writing DataFrame to PostgreSQL via JDBC extremely slow (Spark 1.6.1)

Hi there, I'm just getting started with Spark and I've got a moderately sized DataFrame created from collating CSVs in S3 (88 columns, 860k rows) that seems to be taking an unreasonable amount of time to insert (using SaveMode.Append) into Postgres. I...

Latest Reply
longcao
New Contributor III
  • 0 kudos

In case anyone was curious how I worked around this, I ended up dropping down to Postgres JDBC and using CopyManager to COPY rows in directly from Spark: https://gist.github.com/longcao/bb61f1798ccbbfa4a0d7b76e49982f84
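A rough Scala sketch of that workaround (connection details, table name, and the naive CSV encoding are all placeholders; real data usually needs proper quoting and escaping):

  import java.sql.DriverManager
  import org.postgresql.copy.CopyManager
  import org.postgresql.core.BaseConnection

  df.rdd.foreachPartition { rows =>
    val conn = DriverManager.getConnection("jdbc:postgresql://host:5432/db", "user", "pass")
    try {
      // COPY streams the whole partition in bulk instead of issuing per-row INSERTs.
      val copy = new CopyManager(conn.asInstanceOf[BaseConnection])
      val csv = rows.map(_.mkString(",")).mkString("\n")
      copy.copyIn("COPY my_table FROM STDIN WITH (FORMAT csv)", new java.io.StringReader(csv))
    } finally {
      conn.close()
    }
  }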

4 More Replies
UmeshKacha
by New Contributor II
  • 12313 Views
  • 3 replies
  • 0 kudos

How to avoid empty/null keys in DataFrame groupby?

Hi, I have a Spark job which does a group by, and I can't avoid it because of my use case. I have a large dataset, around 1 TB, which I need to process/update in a DataFrame. Now my job shuffles huge amounts of data and slows things down because of the shuffling and groupBy. One r...

Latest Reply
silvio
New Contributor II
  • 0 kudos

Hi Umesh, if you want to completely ignore the null/empty values then you could simply filter before you do the groupBy, but do you want to keep those values? If you want to keep the null values and avoid the skew, you could try splitting the DataF...
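A minimal Scala sketch of the filter-first suggestion (the column name key is hypothetical):

  // Drop null and empty keys before the shuffle so they never reach the groupBy.
  val grouped = df
    .filter("key is not null and key <> ''")
    .groupBy("key")
    .count()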

2 More Replies
johnmcauley
by New Contributor II
  • 14061 Views
  • 2 replies
  • 0 kudos

How do I escape a query string in Spark SQL?

Hey all, I am trying to filter on a string but the string has a single quote - how do I escape the string in Scala? I have tried an old version of StringEscapeUtils but no luck. Sorry if this is a silly question - I'm new to Scala. import org.apache.commons.lan...

Latest Reply
antoniosarco
New Contributor II
  • 0 kudos

Generally, when you deal with an apostrophe you replace the single quote (') with two single quotes (''). More about handling single quotes. Antonio
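Two small Scala sketches of the same idea (the DataFrame, column, and value are illustrative): double the quote inside a SQL literal, or sidestep escaping entirely with the Column API.

  // SQL-style literal: '' inside the string stands for one single quote.
  val viaSql = df.filter("name = 'O''Brien'")
  // Column API: the Scala string is passed as-is, so no escaping is needed.
  val viaColumn = df.filter(df("name") === "O'Brien")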

1 More Replies
MarcLimotte
by New Contributor II
  • 28138 Views
  • 12 replies
  • 0 kudos

Why do I get 'java.io.IOException: File already exists' for saveAsTable with Overwrite mode?

I have a fairly small, simple DataFrame, month:month.schema org.apache.spark.sql.types.StructType = StructType(StructField(month,DateType,true), StructField(real_month,TimestampType,true), StructField(month_millis,LongType,true))The month Dataframe i...

Latest Reply
ReKa
New Contributor III
  • 0 kudos

Your schema is tight, but make sure that the conversion to it does not throw an exception. Try memory-optimized nodes; you may be fine. My problem was parsing a lot of data from sequence files containing 10K XML files and saving them as a table...

11 More Replies
RobertWalsh
by New Contributor II
  • 23325 Views
  • 11 replies
  • 0 kudos

Dataframe Write Append to Parquet Table - Partition Issue

Hello, I am attempting to append new json files into an existing parquet table defined in Databricks. Using a dataset defined by this command (dataframe initially added to a temp table): val output = sql("select headers.event_name, to_date(from_unix...

Latest Reply
anil_s_langote
New Contributor II
  • 0 kudos

We came across a similar situation. We are using Spark 1.6.1 and have a daily load process to pull data from Oracle and write it as parquet files. This works fine for 18 days of data (until the 18th run); the problem comes after the 19th run, where the data frame l...
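For reference, the basic shape of a partitioned Parquet append in this era of Spark is sketched below (the path and partition column are hypothetical); it does not by itself explain or resolve the failure described above.

  output.write
    .mode("append")
    .partitionBy("event_date")
    .parquet("/mnt/warehouse/events")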

10 More Replies
jpalbeza
by New Contributor II
  • 9104 Views
  • 3 replies
  • 0 kudos

Resolved! How to see the textbox input from getArgument() or dbutils.widgets.text() or dbutils.widgets.dropdown()

getArgument() has been deprecated. I don't see the text box for me to type in any input anymore. What I actually see though is the following error: Deprecation warning: Use dbutils.widgets.text() or dbutils.widgets.dropdown() to create a widget and...

Latest Reply
RyanJohnson
New Contributor II
  • 0 kudos

So shouldn't it be removed from the tutorial notebook showing how to connect to S3? I'm trying to connect to S3 for the first time and a deprecation warning isn't a pleasant first experience with a tool I am paying for.
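For anyone landing here from the same warning, a minimal Scala sketch of the replacement widgets API (the widget name, default, and label are illustrative):

  // Create the widget once; it renders a text box at the top of the notebook.
  dbutils.widgets.text("run_date", "2016-01-01", "Run date")
  // Read its current value wherever getArgument() was used before.
  val runDate = dbutils.widgets.get("run_date")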

2 More Replies
Sri1
by New Contributor II
  • 13523 Views
  • 5 replies
  • 0 kudos

Create a in-memory table in Spark and insert data into it

Hi, my requirement is to create a Spark in-memory table (not pushing a Hive table into memory), insert data into it, and finally write that back to a Hive table. The idea here is to avoid disk IO while writing into the target Hive table. There are lot ...

Latest Reply
vida
Databricks Employee
  • 0 kudos

Got it - how about using a UnionAll? I believe this code snippet does what you'd want:from pyspark.sql import Row array = [Row(value=1), Row(value=2), Row(value=3)] df = sqlContext.createDataFrame(sc.parallelize(array)) array2 = [Row(value=4), Ro...
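The same idea as a Scala sketch (the table and column names are illustrative): build the pieces as DataFrames, union them in memory, and write to the Hive table once at the end.

  val df1 = sqlContext.createDataFrame(Seq((1, "a"), (2, "b"))).toDF("id", "value")
  val df2 = sqlContext.createDataFrame(Seq((3, "c"))).toDF("id", "value")
  // unionAll keeps everything in memory until the single write at the end.
  val combined = df1.unionAll(df2)
  combined.write.mode("append").saveAsTable("target_table")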

4 More Replies
dan11
by New Contributor II
  • 4669 Views
  • 1 reply
  • 1 kudos

sql: how to convert datatype of column?

Bricklayers, I want to port this sql statement from sqlite to databricks: select cast(myage as number) as my_integer_age from ages; Does databricks allow me to do something like this?

Latest Reply
raela
Databricks Employee
  • 1 kudos

@dan11 We don't support number in Spark SQL. Try using int, double, float, and your query should be fine. To run SQL in a notebook, just prepend any cell with %sql. %sql select cast(myage as double) as my_integer_age from ages;

