Hi @Marcel Baumgartner, It is unusual for the read_xlsx() function from the readxl package to take so long to read a relatively small Excel file. There could be several reasons for this performance issue, including cluster configuration, available resources, and the underlying file storage.
Here are some suggestions to diagnose and potentially resolve the performance issue:
- Cluster resources: Check if your cluster has enough resources (CPU, memory) to handle the operation. If your cluster is under high load or has limited resources, it might be causing the performance issue. You can monitor the cluster's resource usage in the Databricks UI under the Clusters tab.
- DBFS latency: DBFS might have higher latency than your local file system, causing the slow read times. You can test this hypothesis by copying the file to the local file system of the driver node and then reading it with read_xlsx().
Use the following command to copy the file from DBFS to the local file system:
%fs cp dbfs:/path/to/your/excel_file /tmp/local_excel_file
Then, in your R code, read the local file:
library(readxl)
df <- read_xlsx("/tmp/local_excel_file")
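To confirm whether storage latency is really the culprit, you can time both reads side by side. A rough sketch (the paths are placeholders for your own file locations; on Databricks, DBFS paths are mounted into the driver's file system under /dbfs):

```r
library(readxl)

# Hypothetical paths -- substitute your own file locations.
dbfs_path  <- "/dbfs/path/to/your/excel_file"
local_path <- "/tmp/local_excel_file"

# Time both reads; a large gap between the two points to storage
# latency rather than readxl itself.
system.time(df_dbfs  <- read_xlsx(dbfs_path))
system.time(df_local <- read_xlsx(local_path))
```

If the local read is fast and the DBFS read is slow, copying the file locally first (as above) is a simple workaround.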
- Alternative libraries: If the issue persists, try an alternative library, such as readxlsb or openxlsx, to read the Excel file. This can help identify whether the problem is specific to the readxl package or a more general issue. Keep in mind that you might need to install these libraries if they are not already available on your cluster.
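For example, with openxlsx the equivalent read looks like this (the path is a placeholder):

```r
# Install if not already available on the cluster:
# install.packages("openxlsx")
library(openxlsx)

# openxlsx uses read.xlsx() (note the dot) rather than read_xlsx()
df <- read.xlsx("/tmp/local_excel_file", sheet = 1)
```

If openxlsx reads the same file quickly, the bottleneck is likely in readxl; if it is equally slow, the problem is more likely storage or cluster resources.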
- Convert to CSV: As a last resort, you can convert the Excel file to CSV, which is generally faster to read in R. You can do this conversion either locally on your computer or programmatically, for example with Python's pandas library. Once the file is in CSV format, use R's read.csv() or the readr package's read_csv() function to read it, which should be significantly faster.
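The conversion can also be done entirely in R as a one-off step, so that every subsequent run only touches the faster CSV. A sketch (paths are placeholders):

```r
library(readxl)
library(readr)

# One-off conversion: read the Excel file once, write it out as CSV.
df <- read_xlsx("/tmp/local_excel_file")
write.csv(df, "/tmp/local_file.csv", row.names = FALSE)

# Subsequent reads of the CSV are typically much faster:
df <- read_csv("/tmp/local_file.csv")
```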
Please note that R in Databricks runs inside a container, and its performance might be different from your local R installation. Also, keep in mind that the performance of the R environment in Databricks might not be as optimized as the PySpark environment, which is native to Databricks. If the performance issue persists, consider using PySpark to read and process the data.
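If you would rather stay in R while still getting Spark-native performance, SparkR (preinstalled on Databricks clusters) is an alternative to PySpark. A sketch, assuming the file has already been converted to CSV (Spark has no built-in Excel reader, and the path is a placeholder):

```r
library(SparkR)
sparkR.session()

# read.df() reads the file through Spark's distributed CSV reader,
# so it scales with the cluster rather than the driver alone.
sdf <- read.df("dbfs:/path/to/converted_file.csv",
               source = "csv", header = "true", inferSchema = "true")
head(sdf)
```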