06-11-2024 09:40 PM
06-12-2024 12:05 AM
Hi there @Tiwarisk,
@Tiwarisk wrote: I am writing a file using this but the data type of columns get changed while reading.
If this is the main issue, you can explicitly specify your table schema like this:
from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DoubleType
schema = StructType([
StructField("column1", StringType(), True),
StructField("column2", IntegerType(), True),
StructField("column3", DoubleType(), True)
])
Then you can read the Excel file like this (in PySpark, to match the schema definition above; the original snippet mixed Scala syntax with a Python schema):
# Read the Excel file with the specified schema
df = (spark.read
    .format("com.crealytics.spark.excel")
    .option("header", "true")
    .schema(schema)  # Specify the schema here
    .load(path))
After this, the write should not cause trouble. When writing data to an Excel file using the `com.crealytics.spark.excel` format, column data types can be altered because the Excel format does not natively support all Spark data types, so the conversion may not be exact.
@Tiwarisk wrote:df.write.format("com.crealytics.spark.excel").option("header", "true").mode("overwrite").save(path)
06-12-2024 12:22 AM
I checked the library you are using to write to Excel and it seems there is a new version available that has improved data type handling.
https://github.com/crealytics/spark-excel
To use the V2 implementation, just change .format("com.crealytics.spark.excel") to .format("excel").
Check the GitHub README for details. If your DataFrame has the same data types as the Excel table, I'm hoping this gives you some more luck.
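The switch is just the format string; the options and write mode stay the same. A minimal sketch (assumes the spark-excel jar matching your Spark version is attached to the cluster; `write_excel` is a hypothetical helper, not part of the library):

```python
# Only the format name changes between the V1 and V2 data sources;
# options and modes are unchanged.
V1_FORMAT = "com.crealytics.spark.excel"
V2_FORMAT = "excel"  # V2 data source name, per the project README

def write_excel(df, path, fmt=V2_FORMAT):
    # Hypothetical helper wrapping the write shown in the thread.
    (df.write
       .format(fmt)
       .option("header", "true")
       .mode("overwrite")
       .save(path))
```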
06-12-2024 03:22 AM
Do you need to write the data again in Excel format? Do you need it in that format? If yes, when reading the Excel file back, are you inferring the schema of the file?
06-12-2024 10:03 PM
Yes, inferSchema is true.
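With inferSchema on, the reader re-guesses the types from the Excel cell values on every read, which is likely where the drift comes from. A safer pattern is to turn inference off and pass the schema explicitly; a minimal sketch (`read_excel_typed` is a hypothetical helper):

```python
def read_excel_typed(spark, path, schema):
    # Turn schema inference off and supply the schema explicitly so
    # Excel's loose cell typing cannot change the column types on read.
    # "excel" is the V2 format name; use "com.crealytics.spark.excel"
    # for the V1 data source.
    return (spark.read
            .format("excel")
            .option("header", "true")
            .option("inferSchema", "false")
            .schema(schema)
            .load(path))
```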
07-10-2024 09:16 AM - edited 07-10-2024 09:16 AM
Hi @Tiwarisk ,
Thank you for reaching out to our community! We're here to help you.
To ensure we provide you with the best support, could you please take a moment to review the responses and choose the one that best answers your question? Your feedback not only helps us assist you better but also benefits other community members who may have similar questions in the future.
If you found an answer helpful, consider giving it a kudo. If a response fully addresses your question, please mark it as the accepted solution. This will help us close the thread and ensure your question is resolved.
We appreciate your participation and are here to assist you further if you need it!
Thanks,
Rishabh