Is it reasonable for the process "Determining the location of DBIO file fragments" to take 7 hours?
10-24-2022 07:56 AM
I only have 1,000 columns, each with 252 rows, so there are only 252,000 data points in total.
How can routing tasks for the best cache locality take 7 hours?
Labels: DBIO File Fragments
10-25-2022 06:50 AM
I tried the following, and it still ran for more than 10 hours before failing with "Fatal error: The Python kernel is unresponsive."
%sql
-- Enable auto-optimization defaults for newly created Delta tables
SET spark.databricks.delta.properties.defaults.autoOptimize.optimizeWrite = true;
SET spark.databricks.delta.properties.defaults.autoOptimize.autoCompact = true;
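Note that these spark.databricks.delta.properties.defaults.* settings only become defaults for tables created after they are set. For an existing table, the equivalent properties would need to be set on the table itself; a minimal sketch, where my_table is a placeholder for the actual table name:

%sql
-- Enable auto-optimization on an existing Delta table (my_table is a placeholder)
ALTER TABLE my_table SET TBLPROPERTIES (
  'delta.autoOptimize.optimizeWrite' = 'true',
  'delta.autoOptimize.autoCompact' = 'true'
);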
10-31-2022 11:52 AM
@Kaniz Fatma Is there any way to shorten the runtime of the "Determining the location of DBIO file fragments" step?

11-27-2022 06:17 AM
Hi @Cheuk Hin Christophe Poon,
Hope all is well! Just wanted to check in: were you able to resolve your issue? If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.
We'd love to hear from you.
Thanks!
11-30-2022 07:01 AM
Hi @Cheuk Hin Christophe Poon, have you optimized your table at any point since its creation? If not, OPTIMIZE may take some time, depending on the number of underlying files.
Please try running OPTIMIZE manually, as described in the document below:
https://docs.databricks.com/sql/language-manual/delta-optimize.html
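For example, a minimal run (my_table is a placeholder for your table name):

%sql
-- Compact the table's small underlying files into larger ones
OPTIMIZE my_table;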
If this doesn't help, you can try disabling the DBIO cache by setting the following in your notebook:
spark.conf.set("spark.databricks.io.cache.enabled", "false")
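If you prefer to stay in SQL, the same session setting can be applied with a SET statement (equivalent to the Python line above):

%sql
SET spark.databricks.io.cache.enabled = false;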

