One approach is to use Azure Data Lake as an intermediary. You can partition your PySpark DataFrames and write them to Azure Data Lake, which is optimized for large-scale data storage and integrates well with PySpark. Once the data is in Azure Data Lake, you can use either Azure Logic Apps or Power Automate to automate the transfer of the partitioned files from Azure Data Lake into SharePoint. This way, you avoid dealing directly with SharePoint's file size limits during the initial load.
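A minimal sketch of the first leg of that pipeline is below, assuming Spark is already configured with credentials for ADLS Gen2 (account key, service principal, or managed identity); the storage account, container, path, and partition count are placeholders you'd adjust for your data volumes:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("export-to-adls").getOrCreate()

# Placeholder source; substitute however you build your DataFrame.
df = spark.read.parquet("/path/to/source_data")

# Placeholder ADLS Gen2 target (abfss://<container>@<account>.dfs.core.windows.net/<path>).
target = "abfss://mycontainer@mystorageaccount.dfs.core.windows.net/exports/my_dataset/"

(
    df.repartition(50)   # tune so each output file stays well under SharePoint's size limit
      .write
      .mode("overwrite")
      .parquet(target)
)
```

From there, a Logic App or Power Automate flow can watch the output folder and copy each Parquet file into the SharePoint document library, so the size-sensitive work stays on the lake side.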
Alternatively, you might consider leveraging the SharePoint REST API, which supports chunked uploads. This method lets you split your PySpark DataFrame into smaller files and upload them sequentially. The REST API gives you more control over the upload process than Logic Apps, and you wouldn't have to worry about filename restrictions since you'd be handling the file uploads programmatically.
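Here is a rough sketch of that chunked-upload flow using the REST API's StartUpload/ContinueUpload/FinishUpload endpoints. The site URL, folder path, chunk size, and the get_access_token() helper are all placeholders: acquiring the OAuth token (for example via MSAL and an Azure AD app registration) and URL-encoding awkward filenames are left out for brevity.

```python
import os
import uuid
import requests

SITE = "https://contoso.sharepoint.com/sites/mysite"   # placeholder site URL
FOLDER = "/sites/mysite/Shared Documents/exports"      # placeholder server-relative folder
CHUNK_SIZE = 10 * 1024 * 1024                           # 10 MB per chunk


def get_access_token() -> str:
    """Placeholder: acquire an OAuth token valid for your SharePoint tenant."""
    raise NotImplementedError


def upload_file(local_path: str, filename: str) -> None:
    headers = {
        "Authorization": f"Bearer {get_access_token()}",
        "Accept": "application/json;odata=verbose",
    }
    create_url = (
        f"{SITE}/_api/web/GetFolderByServerRelativeUrl('{FOLDER}')"
        f"/Files/add(url='{filename}',overwrite=true)"
    )

    size = os.path.getsize(local_path)
    if size <= CHUNK_SIZE:
        # Small file: send the whole payload in a single request.
        with open(local_path, "rb") as f:
            requests.post(create_url, headers=headers, data=f.read()).raise_for_status()
        return

    # Large file: create an empty placeholder, then stream chunks into it.
    requests.post(create_url, headers=headers).raise_for_status()
    upload_id = uuid.uuid4()
    file_base = f"{SITE}/_api/web/GetFileByServerRelativeUrl('{FOLDER}/{filename}')"
    offset = 0

    with open(local_path, "rb") as f:
        chunk = f.read(CHUNK_SIZE)
        while chunk:
            remaining = size - offset - len(chunk)
            if offset == 0:
                endpoint = f"{file_base}/StartUpload(uploadId=guid'{upload_id}')"
            elif remaining > 0:
                endpoint = f"{file_base}/ContinueUpload(uploadId=guid'{upload_id}',fileOffset={offset})"
            else:
                endpoint = f"{file_base}/FinishUpload(uploadId=guid'{upload_id}',fileOffset={offset})"
            requests.post(endpoint, headers=headers, data=chunk).raise_for_status()
            offset += len(chunk)
            chunk = f.read(CHUNK_SIZE)
```

You'd run something like this once per partitioned file that PySpark wrote out, which keeps every individual upload comfortably inside SharePoint's limits while you retain full control over naming and retries.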