In Spark SQL, you can use a command like "INSERT OVERWRITE DIRECTORY", which writes the query result out as files in the target directory
https://docs.databricks.com/spark/latest/spark-sql/language-manual/sql-ref-syntax-dml-insert-overwri...
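As a sketch, such a statement would look like the following (the output path /tmp/my_output and the table name my_table are placeholders, not names from your workspace):

```sql
-- Write the result of the SELECT as Parquet files under the given path.
-- '/tmp/my_output' and my_table are hypothetical example names.
INSERT OVERWRITE DIRECTORY '/tmp/my_output'
USING parquet
SELECT * FROM my_table;
```

Any existing contents of the directory are replaced by the new files.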
Using a shell cell, you can create a directory with
%sh mkdir /tmp/file
and create a temp file with
%sh echo 'This is a test' > data.txt
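Putting the two shell steps together in one cell (the path /tmp/spark_demo is just an example, and mkdir -p avoids an error if the directory already exists):

```shell
# create a working directory for the temp file
mkdir -p /tmp/spark_demo
# write a line of text into a file inside that directory
echo 'This is a test' > /tmp/spark_demo/data.txt
# print the file contents to verify the write
cat /tmp/spark_demo/data.txt
```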
If you have more questions, please let us know what use case you are trying to accomplish.