Assuming the S3 bucket is mounted in the workspace, you can provide a file path directly.
If you want to write a PySpark DataFrame, you can do something like the following:
df.write.format('json').save('/path/to/file_name.json')
Note that Spark writes a directory of part files at that path rather than a single JSON file; if you need one file, call .coalesce(1) on the DataFrame before writing.
You could also use Python's built-in json library, but that would be non-PySpark code: it runs on the driver only, so it is suitable just for data small enough to collect.
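As a sketch of the json-library approach (the row values and the output path here are placeholders; in real use you would pull the rows from the DataFrame on the driver, e.g. via df.collect(), and only for small data):

```python
import json

# Hypothetical rows standing in for collected DataFrame rows,
# e.g. rows = [r.asDict() for r in df.collect()]
rows = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

# Write a single JSON file on the driver (not distributed)
with open("/tmp/file_name.json", "w") as f:
    json.dump(rows, f)
```

Unlike df.write, this produces exactly one file, at the cost of losing Spark's parallel, scalable write path.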