02-04-2024 01:26 PM
I first create a catalog and schema and ingest some data into it as follows:
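A minimal sketch of that kind of setup, using Auto Loader as described later in the thread; the catalog, schema, table, and bucket names here are placeholders, not the actual values:

```python
# Create the catalog and schema once (Unity Catalog).
spark.sql("CREATE CATALOG IF NOT EXISTS my_catalog")
spark.sql("CREATE SCHEMA IF NOT EXISTS my_catalog.my_schema")

# Read new CSV files from the bucket with Auto Loader...
df = (spark.readStream
      .format("cloudFiles")                                   # Auto Loader source
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas")
      .load("s3://my-bucket/landing/"))

# ...and append them to a managed Delta table.
(df.writeStream
   .outputMode("append")
   .option("checkpointLocation", "s3://my-bucket/_checkpoints/my_table")
   .trigger(availableNow=True)                                # process pending files, then stop
   .toTable("my_catalog.my_schema.my_table"))
```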
02-04-2024 08:39 PM
Hi @OLAPTrader, The issue you’re experiencing is likely because when you drop a table in Spark, it deletes the associated metadata. This includes the schema and any other information Spark uses to read the data. When you then try to append new data to the table, Spark doesn’t know how to interpret it because the schema information is missing.
In your case, you’re using the append mode in your write operation, which means Spark is trying to append the new data to the existing table. If the table doesn’t exist (because it was dropped), Spark will try to create a new table.
However, since the schema information is missing, it doesn't know how to create the table correctly, and so no data is ingested.
To resolve this issue, you could consider the following options:
Avoid dropping the table: Since you’re constantly ingesting new data, there’s no need to drop the table. Just append the new data to the existing table.
Recreate the schema: If you need to drop the table, recreate the schema before ingesting new data. You can do this by defining the schema in your code and applying it when you create the table (see the sketch after this list).
Use overwrite mode cautiously: This mode replaces all existing data in the table with the new data, so it's generally not recommended for scenarios where you want to keep historical data.
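A rough illustration of the second option, pinning the schema in code; the column names and types here are hypothetical:

```python
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

# Hypothetical schema; replace with the table's real columns and types.
explicit_schema = StructType([
    StructField("symbol", StringType(), True),
    StructField("price", DoubleType(), True),
    StructField("event_time", TimestampType(), True),
    # ... remaining columns
])

# After dropping the table, recreate it with the known schema
# (empty DataFrame, schema only) before ingesting again.
(spark.createDataFrame([], explicit_schema)
      .write.format("delta")
      .saveAsTable("my_catalog.my_schema.my_table"))
```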
02-05-2024 05:27 AM
Thank you for your reply. When I uncomment the line which drops the table, that is when it actually works: I get the most recent CSV ingested (plus the older ones, I assume). This is not what I want to do. So, in short, when I do not drop the table, nothing works (no errors, but no recent data ingested). I think you may have misread the question.
02-05-2024 11:51 PM
Hi @OLAPTrader, Thank you for sharing the details!
When you drop the table and recreate it each time, it indeed has some downsides, especially as the data volume grows. The process of recreating the table involves reading all the existing data from the bucket and ingesting it again, which can become time-consuming and resource-intensive.
To avoid this, you can modify your approach. Instead of dropping and recreating the table, consider the following steps:
Initial Setup: Create the target Delta table once, up front, instead of recreating it on every run.
Incremental Data Ingestion: Run Auto Loader in append mode with a fixed checkpoint location; the checkpoint tracks which files have already been processed, so each run picks up only the new files in the bucket.
Periodic Optimization: Periodically run OPTIMIZE to compact small files and VACUUM to remove data files the table no longer references (a sketch follows this list).
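A short sketch of the third step (the first two are the create-once and append setup already shown above; the table name is a placeholder):

```python
# Compact many small files into fewer, larger ones.
spark.sql("OPTIMIZE my_catalog.my_schema.my_table")

# Remove data files the table no longer references (7-day retention by default).
spark.sql("VACUUM my_catalog.my_schema.my_table")
```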
Feel free to adjust the frequency of the VACUUM operation based on your specific use case and data growth. If you have any further questions or need additional assistance, just ask! 😊
02-06-2024 07:39 AM
Thanks, I am not dropping the table and I do use append mode. However, the only time Auto Loader works is if I do indeed drop the table, which I was doing just for testing. When I comment out that line, I never get new files ingested into the table.
02-06-2024 07:29 PM
My issue was due to the fact that I have over 300 columns, and because of datatype mismatches the rows were actually written to the table but the values were all null. That's why I didn't get any errors. I am doing manual datatype mapping now, and I am able to get Auto Loader working and appending without dropping the table. Thanks, everyone.
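For anyone hitting the same symptom: one way to do that manual mapping is to hand Auto Loader an explicit schema instead of relying on inference (or use the cloudFiles.schemaHints option to override just the problem columns). A minimal sketch with hypothetical column names:

```python
from pyspark.sql.types import (StructType, StructField, StringType,
                               LongType, DecimalType)

# Hypothetical columns; in practice this would list all 300+ with explicit types.
manual_schema = StructType([
    StructField("id", LongType(), True),
    StructField("ticker", StringType(), True),
    StructField("close_price", DecimalType(18, 4), True),
    # ...
])

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("mode", "FAILFAST")     # optional: error on a malformed row
                                      # instead of silently writing nulls
      .schema(manual_schema)          # explicit types, so values parse correctly
      .load("s3://my-bucket/landing/"))
```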