If the streaming job is only making blind appends to the Delta table, then it's perfectly fine to run OPTIMIZE in parallel.
However, if the streaming job performs MERGE or UPDATE operations, those can conflict with a concurrently running OPTIMIZE. In that case, you can add custom logic to the streaming job so that it runs OPTIMIZE itself, for example once every 100 micro-batches, as sketched below.
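Here is a minimal sketch of that pattern using foreachBatch: the micro-batch function does the MERGE and, every N batches, issues OPTIMIZE from within the same job so it never races with the stream's own writes. The table names (`target_table`, `source_table`), the join key `id`, the checkpoint path, and the batch interval of 100 are illustrative placeholders, not fixed values.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

OPTIMIZE_EVERY_N_BATCHES = 100  # tune to your batch size and small-file churn

def upsert_and_maybe_optimize(micro_batch_df, batch_id):
    # Apply the micro-batch as a MERGE into the target Delta table
    # (merge condition and columns are placeholders).
    micro_batch_df.createOrReplaceTempView("updates")
    micro_batch_df.sparkSession.sql("""
        MERGE INTO target_table AS t
        USING updates AS s
        ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)

    # Periodically compact small files from inside the job itself,
    # so OPTIMIZE never runs concurrently with the stream's MERGE.
    if batch_id % OPTIMIZE_EVERY_N_BATCHES == 0:
        micro_batch_df.sparkSession.sql("OPTIMIZE target_table")

(spark.readStream
      .table("source_table")
      .writeStream
      .foreachBatch(upsert_and_maybe_optimize)
      .option("checkpointLocation", "/tmp/checkpoints/target_table")
      .start())
```

Because foreachBatch runs the function serially per micro-batch, the OPTIMIZE call simply adds latency to that one batch instead of creating a write conflict.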
The full matrix of which operations conflict with each other is documented here:
https://docs.databricks.com/delta/concurrency-control.html#write-conflicts