I had the same issue, and my usage was similar to OP:
base64(aes_encrypt(<clear_text>, unbase64(secret(<scope>, <key>))))
Databricks support suggested not calling secret() inside the INSERT/UPDATE operation that writes to the table. After updating the Python code to first fetch the encryption key from the secret, and then executing aes_encrypt via expr with that key, the redaction and data corruption went away.
This article describes the overall approach I followed, using a mix of PySpark and SQL. Just remember to retrieve the encryption key first and then construct the aes_encrypt call with that key, rather than calling secret from within aes_encrypt.
Hope this helps!