I doubt Redshift supports streaming writes from Spark; as far as I know, the Spark-Redshift connector only does batch loads (it stages data in S3 and issues a COPY).
A few workarounds:

- Write the changes from the Delta table on Databricks (via the change data feed) into Kafka/Kinesis and consume that stream on the Redshift side. A rough sketch of this option is below.
- Overwrite the whole table on each refresh.
- Use a merge, but then you need to know exactly what has to be sent to Redshift (i.e. which rows were inserted, updated, or deleted).
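
Here's a minimal sketch of the CDF-to-Kafka route, assuming the change data feed is already enabled on the table (`delta.enableChangeDataFeed = true`). The table name, broker address, topic, and checkpoint path are placeholders, not anything from your setup:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_json, struct

spark = SparkSession.builder.getOrCreate()

# Stream the change data feed of the Delta table. Each row carries the
# CDF metadata columns (_change_type, _commit_version, _commit_timestamp),
# so the consumer can tell inserts, updates, and deletes apart.
changes = (
    spark.readStream
    .format("delta")
    .option("readChangeFeed", "true")
    .table("my_schema.my_table")  # placeholder table name
)

# Serialize every change row (data + CDF metadata) to JSON and push it
# to a Kafka topic; something on the Redshift side applies the changes.
query = (
    changes
    .select(to_json(struct("*")).alias("value"))
    .writeStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("topic", "delta-table-changes")            # placeholder topic
    .option("checkpointLocation", "/tmp/cdf_checkpoint")
    .start()
)
```

On the Redshift side, one option is Redshift streaming ingestion (a materialized view over a Kinesis stream or an MSK topic); otherwise you can land the messages in S3 and COPY/MERGE them in on a schedule.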