Data Engineering
how to access data objects from different languages [R/SQL/Spark/Python]

fs
New Contributor III

Hi, sorry, I'm new to Spark and Databricks. Could someone please summarise the options for moving data between these different languages? I'm especially interested in R <=> Python options: I can see how to do SQL/Spark. I've spent a lot of time googling with no result. I presume I can use R's reticulate to access Python objects..?

Anyway grateful for any idiot-proof links, quick guides, code.

1 ACCEPTED SOLUTION

Accepted Solutions

If you're using R, I highly recommend the sparklyr package from RStudio. Many of the PySpark functions have the same name; for example, "spark.read.table()" is "spark_read_table()" in sparklyr. More info here: https://spark.rstudio.com/packages/sparklyr/latest/reference/spark_read_table.html


12 REPLIES

Pholo
Contributor

Hi, you can create a temporary view and then retrieve it from any programming language:

For example, create it in SQL:

%sql
CREATE OR REPLACE TEMPORARY VIEW Test1 AS
 SELECT *
 FROM TEST

And then retrieve it in Python:

%python
spark.read.table('Test1')

fs
New Contributor III

Thanks. I took that approach to create view and then queried it using SQL from R:

%r

rd=as.data.frame(sql("select * from CNTRY_FLOWS"))

...not sure if there's a more direct route. I was unsure what the equivalent to python spark.read.table() for R was.
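For reference, SparkR (which Databricks preloads in %r cells) does offer a more direct counterpart to Python's spark.read.table(). A minimal sketch, assuming the CNTRY_FLOWS table/view above exists:

```r
%r
library(SparkR)

# Direct equivalent of Python's spark.read.table("CNTRY_FLOWS"):
# returns a SparkDataFrame without pulling any data into R
sdf <- tableToDF("CNTRY_FLOWS")

# Bring it into a local R data.frame only when it is small enough
rd <- collect(sdf)
```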

If you’re using R I highly recommend the sparklyr package from RStudio. Many of the pyspark functions have the same name, for example, “spark.read.table()” is “spark_read_table” in sparklyr. More info here: https://spark.rstudio.com/packages/sparklyr/latest/reference/spark_read_table.html

One other thing I thought of: with Spark you want to keep your data in Spark as much as possible and not bring it back to R unless you have to. With sparklyr you can use many tidyverse functions directly in Spark without having to collect your results into a data frame first. For R functions or packages that don't connect to the Spark API directly, you can use sparklyr::spark_apply to distribute your R code over the cluster and leave your data frames in Spark.
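A minimal sketch of that sparklyr workflow on Databricks (the table name and the country column are placeholders for illustration):

```r
%r
library(sparklyr)
library(dplyr)

# Attach to the cluster's existing Spark session
sc <- spark_connect(method = "databricks")

# Lazy reference to a metastore table; nothing is pulled into R yet
flows <- spark_read_table(sc, "CNTRY_FLOWS")

# dplyr verbs are translated to Spark SQL and executed on the cluster;
# collect() only at the end, once the result is small
top_flows <- flows %>%
  group_by(country) %>%   # hypothetical column
  summarise(n = n()) %>%
  collect()

# For arbitrary R code with no Spark translation, spark_apply()
# runs the supplied function on each partition across the cluster
passthrough <- spark_apply(flows, function(df) df)  # identity, as a placeholder
```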

fs
New Contributor III

Thanks so much for this. Actually, most of my code has been Python to date. It was really about knowing how to access objects from one language in the others, e.g. I had some R code to produce a graph that I wanted to reuse.

fs
New Contributor III

I think this is a trick that Spark still hasn't (yet) managed. Why can't there be a standard syntax to access all objects across all the languages it supports (with appropriate data-structure translation)?

Depending on what you're doing, there is a package called reticulate in R that lets you directly share objects between R and Python, no Spark required. https://rstudio.github.io/reticulate/
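A minimal sketch of what reticulate's R <=> Python object sharing looks like (outside Spark entirely), assuming a Python installation is available to R:

```r
library(reticulate)

# Run Python code; its variables become visible via the `py` object
py_run_string("x = [1, 2, 3]")
py$x                 # comes back as an ordinary R vector

# Push an R object into the Python session
py$r_df <- mtcars    # arrives as a pandas DataFrame, if pandas is installed

# Import a Python module and call it with R arguments
np <- import("numpy")
np$mean(py$x)
```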

However, I've found (so far) that it really only works in RStudio, which runs very well on Databricks when using the Databricks ML runtimes. You can find RStudio preinstalled on the cluster under "Apps" on the cluster's page once you disable auto-termination for inactivity:

[Screenshot: cluster settings needed to run RStudio on Databricks; RStudio appears under "Apps"]

There is a bit more set up once in RStudio on DB to make reticulate work flawlessly that I could post if interested.

What I haven’t tested yet is what happens if you make an Rmd using reticulate in RStudio on Databricks and then try to schedule that in a DB Workflow later. If I do I’ll be sure to post about it in the community.

I'd love it if more direct adoption of reticulate were included in Databricks notebooks, extended to Scala and SQL too, like you've suggested. Each language has its advantages, and there are some really awesome ML packages in tidymodels for R that are, in my opinion, frequently overlooked by the Python community because switching between languages is difficult in most places.

fs
New Contributor III

Hi, thanks for this. Yes, I was aware of reticulate, though not of its use in Databricks. Actually, most of my code is Python. I just had an issue getting some libraries to load, and I had R code for that, so I wanted to include an R chunk.

Anonymous
Not applicable

@Simone Folino​ & @Matthew Giglia​ Amazing responses, thank you for jumping in and providing your personal expertise in this thread!

@Fernley Symons​ I think we are all eager to hear if these suggestions got you 100% sorted! If so, feel free to choose one of the replies as "best" so the rest of the community knows this question is answered in the future. If not, feel free to let us know what else you need. Thanks!

Noopur_Nigam
Valued Contributor II

Hi @Fernley Symons​ Gentle reminder on the answer provided above. Please let us know if you have more doubts or queries.

fs
New Contributor III

? I've already voted best answer...

Noopur_Nigam
Valued Contributor II

@Fernley Symons​ Thank you for your prompt reply. Apologies, we have just noticed that an answer is already marked as best. Thank you once again.
