10-06-2022 08:13 AM
Hello,
I am getting inconsistent representation of long types.
1661817599972 is the unix timestamp in milliseconds for Monday, August 29, 2022 11:59:59.972 PM GMT
when I execute:
`select 1661817599972 as t`
the result is:
166181759997 (last digit truncated)
The number is correctly cast as a bigint according to `describe select 1661817599972 as t`.
However when I execute:
`select timestamp_millis(1661817599972) as t`
the result is (correctly) `2022-08-29T23:59:59.972+0000`. So the internal representation of the object seems correct. This appears to happen for bigints only. For instance:
`select 16618175678 as t`
yields:
1661817567
but
`select 1661817567 as t`
also yields
1661817567
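For reference, a minimal sketch to probe the INT/BIGINT boundary (in Spark SQL, integer literals up to 2147483647 are typed INT and larger ones BIGINT, which matches the pattern above) and a possible string-cast workaround, assuming the truncation is display-only:
```sql
-- 2147483647 is the INT maximum in Spark SQL: the first literal is
-- typed INT, the second BIGINT, so only the second should truncate
-- if the issue really affects bigints only.
select 2147483647 as int_max, 2147483648 as first_bigint;

-- Casting to STRING bypasses numeric rendering, so the full value
-- should display even if BIGINT display is truncated.
select cast(1661817599972 as string) as t_str;
```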
DBR 11.2
Thank you,
Cosimo.
PS: displays correctly in DBSQL
10-20-2022 05:22 AM
Hi, I tried again on another cluster and it is still working OK for me. Maybe try changing the runtime version. Additionally, you can check Spark UI -> Environment -> Spark properties; perhaps some options are set there that change this behavior. I just tried to find something related in the options but couldn't.
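As a SQL-side alternative to the Spark UI, a minimal sketch for listing the properties from a notebook SQL cell:
```sql
-- List Spark SQL properties and their current values.
SET;

-- Include every property with its default value and description.
SET -v;
```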
10-25-2022 04:27 PM
Thank you both for following up; unfortunately, I'm still having the issue. It seems like a notebook frontend issue to me...
@Hubert Dudek thanks for the suggestion, I will take a look.
Best,
Cosimo.
11-14-2022 12:58 AM
Hi @Cosimo Felline,
Hope all is well!
Does @Hubert Dudek (Customer)'s response answer your question?
If your issue is resolved, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.
We'd love to hear from you.
Thanks!