Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Pandas API on Spark: does it run on a multi-node cluster?

Mado
Valued Contributor II

Hi,

I have a few questions about "Pandas API on Spark". Thank you for taking the time to read them.

1) Is the input to these functions a pandas DataFrame or a PySpark DataFrame?

2) When I use a pandas function (like isna, size, apply, where, etc.), does it run on only one node or on multiple nodes?

Thanks.

1 ACCEPTED SOLUTION


4 REPLIES 4

Debayan
Esteemed Contributor III

Hi @Mohammad Saber,

A pandas dataset lives on a single machine and is naturally iterable locally. A pandas-on-Spark dataset, by contrast, lives across multiple machines and is computed in a distributed manner. It is difficult to iterate over locally, and users can easily end up collecting the entire dataset to the client side without realizing it. Therefore, it is best to stick to the pandas-on-Spark APIs.

Please refer:

https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/best_practices.html#use-p...

https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/index.html

https://docs.databricks.com/languages/pandas-spark.html
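A minimal sketch of the point above, written in plain pandas so it runs anywhere (assumption: on a Databricks cluster with Spark 3.2+, you would use `import pyspark.pandas as ps` instead, and the same pandas-style calls would run distributed; `to_pandas()` is the call that collects everything to the driver):

```python
import pandas as pd

# Plain pandas: the whole DataFrame lives in one machine's memory.
pdf = pd.DataFrame({"id": range(6), "value": [1.0, None, 3.0, None, 5.0, 6.0]})

# On a Spark cluster you would instead write (assumption: pyspark >= 3.2):
#   import pyspark.pandas as ps
#   psdf = ps.DataFrame({...})   # rows are partitioned across executors
#   pdf  = psdf.to_pandas()      # collects EVERYTHING to the driver --
#                                # the accidental-collect pitfall described above.

# Staying inside the pandas-style API keeps the work on the cluster; the same
# calls work unchanged on a plain pandas DataFrame:
missing = pdf["value"].isna().sum()
print(missing)  # 2
```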

Please let us know if you need further clarification. We are more than happy to assist.

Mado
Valued Contributor II

@Debayan Mukherjee,

Thanks for your help.

I have a question about the terms "pandas dataset" and "pandas-on-Spark dataset".

When you say "dataset", does it refer to a "DataFrame"?

If I create a "pandas-on-Spark dataset", can I apply pandas functions to it, or should I convert it to a "pandas dataset" before such a computation?

If I need to convert it to a "pandas dataset", I think the computation will be done on a single node. Is that correct?

I would like to share the following information, which might help you.

Pandas API on Spark fills this gap by providing pandas-equivalent APIs that work on Apache Spark. Pandas API on Spark is useful not only for pandas users but also for PySpark users, because it supports many tasks that are difficult to do with PySpark, for example plotting data directly from a PySpark DataFrame. Doc: https://docs.databricks.com/_static/notebooks/pandas-to-pandas-api-on-spark-in-10-minutes.html
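A small sketch of the "pandas-equivalent APIs" idea, written in plain pandas so it runs anywhere (assumption: on a Spark 3.2+ cluster, replacing the import with `import pyspark.pandas as ps` and `pd.DataFrame` with `ps.DataFrame` leaves the rest of the code unchanged while the operations run distributed across worker nodes):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [10, 20, 30, 40]})

# pandas-style operations; with pyspark.pandas these same lines would be
# computed across the cluster's executors instead of a single machine.
total = df["y"].sum()
doubled = df["x"].apply(lambda v: v * 2)

print(total)             # 100
print(doubled.tolist())  # [2, 4, 6, 8]
```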

Mado
Valued Contributor II

Thanks for your reply.

I just want to confirm that Pandas API on Spark uses the parallelism capability of Spark (computations on multiple nodes).
