If you use the Spark JSON reader, the files will be read in parallel automatically. The larger the cluster, the more files you can read in parallel.
Keep in mind that JSON files are usually small. Spark does not handle large numbers of small files well, so performance may suffer.
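A minimal sketch of what this looks like, assuming your JSON files sit under a single directory (the path here is hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-json").getOrCreate()

# Spark lists the files under the directory and distributes them
# across the executors, so parallelism scales with cluster size.
df = spark.read.json("/data/raw/json/")  # hypothetical input path

df.printSchema()
print(df.count())
```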
Depending on the use case, it can be a good idea to do an initial conversion to Parquet/Delta Lake (which will take some time because of the many small files) and then keep appending new files to that table.
Your data jobs can then read from the Parquet/Delta Lake table, which will be a lot faster.
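A rough sketch of that workflow, assuming the Delta Lake package is available on the cluster and using hypothetical paths (with plain Parquet, swap `format("delta")` for `format("parquet")`):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-delta").getOrCreate()

# One-time backfill: slow because of the many small JSON files.
raw = spark.read.json("/data/raw/json/")  # hypothetical path
raw.write.format("delta").mode("overwrite").save("/data/tables/events")

# Later runs: append only the newly arrived JSON files.
new_files = spark.read.json("/data/raw/json/new/")  # hypothetical path
new_files.write.format("delta").mode("append").save("/data/tables/events")

# Downstream jobs read the compacted table instead of the raw JSON.
events = spark.read.format("delta").load("/data/tables/events")
```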