Hello, I am trying to read 3,000 JSON files, each containing only one record. It is taking 2 hours to read all the files. How can I perform this operation faster? Please suggest.
You can start off by specifying the schema upfront instead of using the infer-schema option, which makes Spark scan every file an extra time. But to be honest, this is the classic "small file problem". The best approach you can take is to compact those small files into larger ones. Or you can read them all and save them as Parquet files with a proper partition size. Take a look at the threads below for inspiration.
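As a rough illustration, here is a minimal PySpark sketch combining both ideas (explicit schema plus one-time compaction). The field names and paths are made up for the example, so swap in your own:

```python
from pyspark.sql.types import StructType, StructField, StringType, LongType

# Hypothetical schema -- replace with the actual fields in your JSON records.
schema = StructType([
    StructField("id", LongType(), True),
    StructField("value", StringType(), True),
])

# Passing the schema up front skips the expensive inference pass over all 3,000 files.
df = spark.read.schema(schema).json("/mnt/data/json/")  # example path

# Compact into larger files; with ~3,000 one-record files,
# a single output file (or a few) is usually plenty.
df.coalesce(1).write.mode("overwrite").parquet("/mnt/data/compacted/")  # example path
```

On Databricks the `spark` session is already available in a notebook. After this one-time compaction, downstream jobs read one Parquet file instead of opening 3,000 tiny JSON files.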