11-21-2024 11:50 AM
I have giant queries (SELECT ... FROM) that I store in .sql files. I want to put those files in a Volume and run the queries from a workflow task.
I can load the file content into a string, then run the query. My question is: is there another option where I don't need to load the file content, but can directly execute the .sql file and store the results in a DataFrame?
11-21-2024 10:11 PM
Hi @lauraxyz, we can load the SQL file from Volumes using dbutils.fs and then create a DataFrame using spark.sql().
Example:
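(The code block from this reply didn't survive in this copy of the thread; below is a minimal sketch of the approach described, with a placeholder Volume path.)

```python
# Read the .sql file from the Volume into a string.
# dbutils.fs.head() returns up to maxBytes of the file (default 64 KB),
# so pass a larger limit for giant queries.
sql_text = dbutils.fs.head(
    "/Volumes/my_catalog/my_schema/my_volume/query.sql",  # placeholder path
    1024 * 1024,
)

# Execute the query text and capture the results as a DataFrame.
df = spark.sql(sql_text)
display(df)
```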
11-22-2024 10:33 AM
Thanks Jahnavi! That's what I'm doing now; I was wondering if there's a way to execute the file directly without parsing its content. Another example: if I have a Python notebook in a Volume, would I be able to execute that notebook directly without parsing its content?
11-22-2024 11:27 PM
@lauraxyz For SQL there is no direct way to run the file without parsing it. However, for Python, we can use %run to run the file from Volumes.
Example:
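(The example block was also lost here; presumably it was a notebook cell along the following lines, assuming your runtime supports %run with a Volume path. The path is a placeholder.)

```python
# In a notebook cell: run a Python file stored in a Volume.
# Note: code run this way may not have the notebook's `spark`
# session predefined (see the follow-up posts below).
%run /Volumes/my_catalog/my_schema/my_volume/my_script.py
```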
11-27-2024 09:26 AM
Thank you @JAHNAVI
How about Python notebooks? Can we directly run .ipynb files?
a month ago
Hi @lauraxyz, good day!
We can run the command below to run .ipynb files:
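(The command itself was not preserved in this copy of the thread. One reconstruction consistent with the nbformat error reported below is to parse and execute the notebook with nbformat and nbclient; the packages used and the path here are assumptions.)

```python
# May require installing the packages first:
#   %pip install nbformat nbclient
import nbformat
from nbclient import NotebookClient

# Parse the .ipynb from the Volume (placeholder path), then execute it.
nb = nbformat.read(
    "/Volumes/my_catalog/my_schema/my_volume/notebook.ipynb",
    as_version=4,
)
NotebookClient(nb).execute()
```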
a month ago
Thanks, gonna give it a try!
a month ago
.ipynb cannot be applied due to ModuleNotFoundError: No module named 'nbformat'.
The .py command seemed to pass, but the insert was never executed, so it fails silently.
a month ago
Issue resolved:
For .py, I was using Spark, and I had to explicitly create the Spark session so that the script runs properly and inserts data.
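(For anyone hitting the same thing, a minimal sketch of that fix; the table name and values are placeholders.)

```python
# At the top of the .py file: create (or reuse) a Spark session explicitly,
# since `spark` is not predefined when the file runs outside a notebook.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder insert; substitute your own statement.
spark.sql("INSERT INTO my_catalog.my_schema.my_table VALUES (1, 'example')")
```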