Writing (the sink side) works without problems via foreachBatch.
I use it in production: a stream auto-loads CSVs from the data lake and writes each micro-batch to SQL via foreachBatch (inside the foreachBatch function you get an ordinary DataFrame with that batch's records, so you can just write it to any JDBC or ODBC target).
Here are more details:
https://docs.databricks.com/spark/latest/structured-streaming/foreach.html
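A minimal sketch of that pattern (the paths, JDBC URL, table name and credentials below are placeholders, not anything specific):

```python
# Sketch: Auto Loader (cloudFiles) -> foreachBatch -> JDBC sink.
from pyspark.sql import DataFrame

def write_to_sql(batch_df: DataFrame, batch_id: int) -> None:
    # batch_df is a plain (non-streaming) DataFrame for this micro-batch,
    # so the regular JDBC batch writer works here.
    (batch_df.write
        .format("jdbc")
        .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")
        .option("dbtable", "dbo.events")
        .option("user", "my_user")
        .option("password", "my_password")
        .mode("append")
        .save())

(spark.readStream
    .format("cloudFiles")                                   # Databricks Auto Loader
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/events")
    .load("/mnt/lake/raw/events/")
    .writeStream
    .foreachBatch(write_to_sql)
    .option("checkpointLocation", "/mnt/lake/_checkpoints/events_to_sql")
    .start())
```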
Reading a stream from MySQL is not the best architecture and is officially not supported. In theory you could write a custom receiver, but a better idea is to also push whatever you save to MySQL into Kafka or some other broker/queue and stream from there.
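If you do go through Kafka, the read side on Databricks is just the built-in Kafka source (broker address and topic below are placeholders):

```python
# Sketch: reading the change events back from Kafka as a stream.
from pyspark.sql.functions import col

kafka_stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "mysql-changes")
    .option("startingOffsets", "latest")
    .load()
    # value arrives as binary; cast it to get the message payload
    .select(col("value").cast("string").alias("payload"), col("timestamp")))
```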
An easy workaround (but it only makes sense if you get fewer than about one new record per second) is to use Azure Logic Apps: a new record in MySQL triggers the flow, which appends the data to an Event Hub, and Databricks then reads the Event Hub as a stream, roughly as sketched below.
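A rough sketch of the Databricks read side, assuming the azure-event-hubs-spark connector is installed on the cluster (the connection string is a placeholder, and whether the encrypt step is needed depends on the connector version):

```python
# Sketch: reading the Event Hub fed by the Logic App as a stream.
conn_str = "Endpoint=sb://mynamespace.servicebus.windows.net/;EntityPath=mysql-changes;..."

eh_conf = {
    # Newer connector versions expect the connection string to be encrypted.
    "eventhubs.connectionString":
        spark._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(conn_str)
}

events = (spark.readStream
    .format("eventhubs")
    .options(**eh_conf)
    .load()
    # body is binary; cast to string to get the JSON the Logic App appended
    .selectExpr("CAST(body AS STRING) AS payload", "enqueuedTime"))
```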