
Create an in-memory table in Spark and insert data into it

Sri1
New Contributor II

Hi,

My requirement is to create a Spark in-memory table (not to cache an existing Hive table in memory), insert data into it, and finally write it back to a Hive table.

The idea is to avoid disk I/O while writing to the target Hive table. There are a lot of insert statements, but I want to write back to the Hive table only after all of them have run.

Could you please let me know if that is possible, or if you have a better solution?

Thanks & Regards,

~Sri

5 REPLIES

vida
Contributor II

1) Use sc.parallelize to create the table.

2) Register it as just a temporary table.

3) You can keep adding insert statements into this table. Note that Spark SQL supports inserting from other tables, so you may need to register additional temporary tables whose contents you insert into the first one.

This table should not write out to disk until you run "saveAsTable" or a similar command.

-Vida

Sri1
New Contributor II

Hi Vida,

Sorry for the late reply.

I tried creating two similar temp tables in Spark based on a Hive table; one had data and the other was empty.

When I try to insert into the empty table, I get the error below:

org.apache.spark.sql.AnalysisException: Inserting into an RDD-based table is not allowed.;

Please correct me if there are any issues with this approach. I tried exactly what you said, except that I created the temp tables from a Hive table.

Regards,

~Sri

vida
Contributor II

Got it - how about using unionAll? I believe this code snippet does what you'd want:

from pyspark.sql import Row

array = [Row(value=1), Row(value=2), Row(value=3)]
df = sqlContext.createDataFrame(sc.parallelize(array))

array2 = [Row(value=4), Row(value=5), Row(value=6)]
df2 = sqlContext.createDataFrame(sc.parallelize(array2))

two_tables = df.unionAll(df2)
two_tables.collect()

>> Out[17]: [Row(value=1), Row(value=2), Row(value=3), Row(value=4), Row(value=5), Row(value=6)]

Sri1
New Contributor II

Vida,

Thank you very much for your help

That works well, but the problem is that I have to insert data from multiple queries. I need to declare a collection of DataFrames to store the data from each query; at the end I can union all the DataFrames and insert the result into a Hive table.

I tried to create a collection of DataFrames in Scala, but I am new to Scala and still struggling.

Could you please share the syntax for declaring a collection/array of DataFrames?

Regards,

~Sri

Hi Sri,

It's probably worth your time going through some DataFrame tutorials. Here's a good one from us on the basics of DataFrames. This material should help you get a sense for how you might create a collection of DataFrames and learn a bit more of the Scala nuances!
