Pyspark timestamp type missing the last three digits of microsecond precision (HH:mm:ss.SSSSSS)

New Contributor II

Hi all,

We are having issues with the timestamp data type in Spark when ingesting files.

The source data carries six digits of sub-second (microsecond) precision, but the most we can extract is three. For example we get 12:03:23.123, but what is required is 12:03:23.123456. The source file has the full precision, but it is lost when the file is ingested. Here is an example:

    df.select(
        to_date("date_col", "yyyy-MM-dd").alias("date"),
        to_timestamp("timestamp_col", "yyyy-MM-dd HH:mm:ss.SSS").alias("timestamp")
    ).show(truncate=False)

|2022-03-16 12:34:56.789|
|2022-03-16 01:23:45.678|

The requirement is for |2022-03-16 12:34:56.456789|.

What is the best way to do this?

Many thanks



Community Manager

Hi @jimbo, Handling high-precision timestamps in Spark can be tricky, especially when you need to preserve microsecond-level precision. 


Let’s explore some strategies to achieve your desired timestamp format.


Custom Format String:

  • You’re currently using the format string "yyyy-MM-dd HH:mm:ss.SSS" to parse the timestamp, which only reads three fractional digits. To retain microsecond precision, extend the fraction pattern from .SSS to .SSSSSS so all six sub-second digits are parsed; Spark's TimestampType stores values with microsecond resolution internally, so nothing is lost once the parse succeeds.
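A minimal sketch of that change, assuming Spark 3.x and an invented column name (timestamp_col) holding a string with six fractional digits:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import to_timestamp, col

    spark = SparkSession.builder.master("local[1]").getOrCreate()

    # Hypothetical input: a string column with six sub-second digits
    df = spark.createDataFrame([("2022-03-16 12:34:56.456789",)], ["timestamp_col"])

    # "SSSSSS" tells Spark's datetime pattern parser to read all six
    # fractional digits; TimestampType keeps microsecond resolution
    parsed = df.select(
        to_timestamp(col("timestamp_col"), "yyyy-MM-dd HH:mm:ss.SSSSSS").alias("ts")
    )
    parsed.show(truncate=False)

Collecting the row back to Python gives a datetime whose microsecond field is the full 456789, confirming the precision survived ingestion.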

Using Unix Timestamps:

  • If you’re dealing with high-precision timestamps, consider carrying them as Unix epoch values (for example, microseconds since the epoch in a long column) instead of formatted strings, and converting to a timestamp at the end. An integer column cannot silently drop fractional digits the way a mis-matched format string can.
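A sketch of that route, assuming the raw value arrives as microseconds since the epoch in a long column (the column name micros is invented); the SQL function timestamp_micros is available from Spark 3.1 onward:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import expr

    spark = SparkSession.builder.master("local[1]").getOrCreate()

    # Hypothetical input: epoch time in microseconds as a 64-bit integer
    df = spark.createDataFrame([(1647434096456789,)], ["micros"])

    # timestamp_micros() converts the long directly to a timestamp, so no
    # floating-point division can round away the last digits
    ts = df.select(expr("timestamp_micros(micros)").alias("ts"))

Avoiding a detour through a double (e.g. dividing by 1e6 and casting) matters here: a double only carries about 15–16 significant digits, which is borderline for modern epoch-microsecond values.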

Proleptic Gregorian Calendar:

  • Starting from Spark 3.0, the Proleptic Gregorian calendar is used for internal operations on timestamps. This ensures consistent behaviour across different dates and years.
  • Be aware that some dates that existed in Spark 2.4 may not exist in Spark 3.0 due to differences in calendars. For example, 1000-02-29 is not a valid date in the Gregorian calendar.
  • Ensure that your Spark version is 3.0 or later to benefit from these improvements.

Remember to adjust your approach based on your specific use case and requirements. If you need to handle timestamps with microsecond precision, the custom format string or Unix timestamps should serve you well! 🕰
