You could run an in-place conversion with:

CONVERT TO DELTA parquet.`/data-pipeline/`

and then register the converted directory as a table in the metastore:

CREATE TABLE events USING DELTA LOCATION '/data-pipeline/'
CONVERT TO DELTA lists all files under the path, infers the schema by reading the footers of the Parquet files, and writes out the Delta transaction log with the necessary metadata and statistics. This is the least disruptive approach since no data is copied or rewritten.
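One caveat: if the directory is partitioned, you have to declare the partition columns as part of the conversion, since they live in the directory names rather than in the Parquet footers. A minimal sketch, assuming a hypothetical date partition column:

CONVERT TO DELTA parquet.`/data-pipeline/` PARTITIONED BY (date DATE)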
If you have multiple external tables sharing the same underlying Parquet directory, you'd need to run the conversion for each of them so their metastore entries get updated as well. For example:
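CONVERT TO DELTA also accepts a metastore table name, which is the convenient form here. A sketch, assuming a hypothetical second table events_archive registered over the same directory (the conversion is idempotent, so rerunning it against an already-converted directory is safe):

CONVERT TO DELTA events_archive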