04-06-2024 10:54 PM
After downloading a file using `wget`, I'm attempting to read it with `spark.read.json`.
Attaching relevant screenshots here.
Could someone please help me identify any errors or issues in my approach?
Thank you.
Accepted Solutions
04-07-2024 09:49 AM
Hi,
Perform a test with the code below, which copies the file from the driver's local filesystem to DBFS:
Code layout:
dbutils.fs.cp("file:///<file source directory>/<file name>", "dbfs:/<file destination directory>/", True)
Thomaz Antonio Rossito Neto
Master Data Specialist - Data Architect | Data Engineer @ CI&T
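To illustrate the suggestion above, here is a minimal end-to-end sketch for a Databricks notebook. The URL and paths are hypothetical placeholders, and `dbutils` and `spark` are provided by the Databricks runtime (this will not run outside it):

```python
# Sketch assuming a Databricks notebook; `dbutils` and `spark` come from the
# runtime, and the URL and /tmp/data.json path are hypothetical examples.

# 1. Download to the driver's local filesystem, e.g. in a %sh cell:
#    wget -O /tmp/data.json https://example.com/data.json

# 2. Copy from the driver's local filesystem to DBFS so the file is
#    reachable by all executors, not just the driver.
dbutils.fs.cp("file:///tmp/data.json", "dbfs:/tmp/data.json")

# 3. Read the JSON file from DBFS with Spark.
df = spark.read.json("dbfs:/tmp/data.json")
df.show()
```

The key point is the scheme prefix: `file:///` addresses the driver's local disk (where `wget` wrote the file), while `dbfs:/` addresses the distributed filesystem that Spark reads from.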
04-09-2024 08:45 AM
Thank you. It worked!
04-07-2024 11:02 PM
Hi @sharma_kamal , Good Day!
Could you please try the code suggested by @ThomazRossito above? It should help you.
Also, please refer to the document below for working with files on Databricks:
https://docs.databricks.com/en/files/index.html
Please let me know if this helps and leave a like if you find this information useful. Follow-ups are appreciated.
Kudos
Ayushi

