Data Engineering

Create an in-memory table in Spark and insert data into it

Sri1
New Contributor II

Hi,

My requirement is to create a Spark in-memory table (not to cache an existing Hive table in memory), insert data into it, and finally write it back to a Hive table.

The idea is to avoid disk I/O while writing to the target Hive table. There are a lot of insert statements, but I want to write back to the Hive table only after all of them have finished.

Could you please let me know if that is possible? Please also let me know if you have a better solution.

Thanks & Regards,

~Sri

5 REPLIES

vida
Databricks Employee

1) Use sc.parallelize to create the table.

2) Register it as just a temporary table.

3) You can keep adding insert statements into this table. Note that Spark SQL supports inserting from other tables, so you may need to register additional temporary tables and insert from them into the first one.

This table should not write out to disk until you run "saveAsTable" or a similar command.
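
For concreteness, a minimal sketch of steps 1 and 2 plus the final write, in Scala (Spark 1.x-era API; it assumes the sc and Hive-enabled sqlContext that the shell or a Databricks notebook provides, and all table and column names here are illustrative):

// Sketch only: build an in-memory table, register it, query it,
// and write to Hive once at the end.
case class Record(id: Int, value: String)

// 1) Create the in-memory table from a local collection.
val rdd = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))
val df = sqlContext.createDataFrame(rdd)

// 2) Register it as a temporary table so SQL can reference it.
df.registerTempTable("staging")

// Queries against it run entirely in memory.
val result = sqlContext.sql("SELECT id, value FROM staging")

// Only this final call (Spark 1.4+ writer API) persists to a Hive table.
result.write.saveAsTable("target_table")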

-Vida

Sri1
New Contributor II

Hi Vida,

Sorry for the late reply.

I tried creating two similar temp tables in Spark based on a Hive table; one had data and the other was empty.

When I try to insert into the empty table, I get the error below:

org.apache.spark.sql.AnalysisException: Inserting into an RDD-based table is not allowed.;

Please correct me if there are any issues with this approach. I did exactly what you said, except that I created the temp tables from a Hive table.

Regards,

~Sri

vida
Databricks Employee

Got it - how about using unionAll? I believe this code snippet does what you'd want:

from pyspark.sql import Row

array = [Row(value=1), Row(value=2), Row(value=3)]
df = sqlContext.createDataFrame(sc.parallelize(array))

array2 = [Row(value=4), Row(value=5), Row(value=6)]
df2 = sqlContext.createDataFrame(sc.parallelize(array2))

two_tables = df.unionAll(df2)
two_tables.collect()

>> Out[17]: [Row(value=1), Row(value=2), Row(value=3), Row(value=4), Row(value=5), Row(value=6)]

Sri1
New Contributor II

Vida,

Thank you very much for your help

That works well, but the problem is that I have to insert data from multiple queries. I would need to declare a collection of DataFrames to store the result of each query, and at the end union all the DataFrames and insert them into a Hive table.

I tried to create a collection of DataFrames in Scala, but I am new to Scala and still struggling.

Please let me know the syntax for declaring a collection/array of DataFrames.

Regards,

~Sri

Hi Sri,

It's probably worth your time going through some DataFrame tutorials. Here's a good one from us on the basics of DataFrames. This material should help you get a sense of how you might create a collection of DataFrames and learn a bit more about the Scala nuances!
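
In the meantime, a minimal sketch of one way to declare a collection of DataFrames in Scala and union them once at the end (Spark 1.x-era API; the source and target table names and the queries are illustrative):

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.sql.DataFrame

// A growable collection to hold the result of each query.
val frames = ArrayBuffer[DataFrame]()

// Append a DataFrame as each query is produced.
frames += sqlContext.sql("SELECT * FROM source_table WHERE part = 1")
frames += sqlContext.sql("SELECT * FROM source_table WHERE part = 2")

// Union everything in one pass, then write the combined result to Hive.
val combined = frames.reduce(_ unionAll _)
combined.write.saveAsTable("target_table")

Because evaluation is lazy and unionAll just combines the query plans, nothing is written to disk until saveAsTable runs at the end.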
