09-11-2021 03:34 AM
Hi,
I have multiple datasets in my data lake that contain valid_from and valid_to columns indicating the validity period of each row.
A row that is currently valid is marked with valid_to = 9999-12-31 00:00:00.
Loading this into a Spark dataframe works fine (Spark has no issue with the timestamp 9999-12-31).
However, for analysis and visualization purposes, I would like to do further processing with pandas instead of Spark. But when trying to convert the dataframe to pandas, an error occurs:

```
ArrowInvalid: Casting from timestamp[us, tz=Etc/UTC] to timestamp[ns] would result in out of bounds timestamp: 253379592300000000
```
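For context, pandas' default datetime64[ns] dtype stores nanoseconds in a signed 64-bit integer, which limits the representable range to roughly the years 1677 through 2262 — anything outside that (like 9999-12-31) overflows during the cast:

```python
import pandas as pd

# The bounds of pandas' nanosecond-resolution timestamps
print(pd.Timestamp.min)  # 1677-09-21 00:12:43.145224193
print(pd.Timestamp.max)  # 2262-04-11 23:47:16.854775807
```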
Code for simulating the issue (run in a Databricks notebook, where `sc` and `display` are predefined):

```python
import datetime
import pandas as pd

# Sample data with dates/timestamps outside pandas' nanosecond range
df_spark_native = sc.parallelize([
    [1, 'Alice', datetime.date(1985, 4, 13), datetime.datetime(1985, 4, 13, 4, 5)],
    [2, 'Bob',   datetime.date(9999, 1, 20), datetime.datetime(9999, 4, 13, 4, 5)],
    [3, 'Eve',   datetime.date(1500, 1, 20), datetime.datetime(1500, 4, 13, 4, 5)],
    [3, 'Dave',  datetime.date(   1, 1, 20), datetime.datetime(   1, 4, 13, 4, 5)],
]).toDF(('ID', 'Some_Text', 'Some_Date', 'Some_Timestamp'))

display(df_spark_native)        # works fine in Spark
df_spark_native.printSchema()

df_spark_to_pandas = df_spark_native.toPandas()   # raises ArrowInvalid
display(df_spark_to_pandas)
```
It appears that, under the hood, Spark uses PyArrow to convert the dataframe to pandas.
PyArrow already offers functionality for handling dates and timestamps that would otherwise cause out-of-range issues: the parameters timestamp_as_object and date_as_object of pyarrow.Table.to_pandas(). However, Spark's toPandas() currently does not allow passing these parameters down to PyArrow.
Accepted Solutions
10-06-2021 07:42 AM
Currently, out-of-bounds timestamps are not supported in PyArrow/pandas. Please refer to the associated JIRA issue below.
09-11-2021 01:15 PM
Hello @Martin B. It's nice to meet you. I'm Piper, one of the community moderators here. Thank you for your question, and I'm sorry to hear about the issue. If no one comments soon, please be patient. The team will be back on Monday.
09-28-2021 08:58 AM
Hi @Piper Wilson, can the team help?
09-29-2021 08:46 AM
@Martin B. - I apologize for my delayed response. I've pinged the team again. Thanks for your patience.
2 weeks ago
Be aware that in Databricks Runtime 15.2 LTS this behavior is broken.
I cannot find the exact code, but it is most likely related to the following option:
https://github.com/apache/spark/commit/c1c710e7da75b989f4d14e84e85f336bc10920e0#diff-f9ddcc6cba651c6...
I was able to reproduce the issue locally with the latest pyarrow installed and this option enabled.