06-01-2023 02:38 AM
Hi all!
This is my first post here!
I have a problem when I launch a "Run all" on my notebook: at some point (always at the same cell), all the following cells are skipped.
As you can see, command 38 is correctly executed, and command 40 has an OK status. When I check the temp view "post_reco_transactions" it contains data, and when I check the underlying table "...rod_sui.post_reco_transactions_${country}a..." (sorry, I prefer to partially hide the path), it contains data too. There is no explicit error message.
Of course, all the cells marked "command skipped" produce empty objects and no output.
When I manually execute the cells from #41 onwards, it works, with no error message.
The problem is that because of this I cannot create a job: it would fail on every execution.
Do you have any idea?
06-02-2023 12:35 PM
Hi Valskyyy,
I tried the same steps on my end, but it works fine for me even with Run All. My guess is that some other issue at the cluster level is resetting the cluster, and hence the temporary view definition is being dropped, since temporary views are tied to a SparkSession.
Could you try putting just the four steps you mentioned in a separate notebook and see if the issue reproduces then?
Also, could you try creating the view with the "df.createOrReplaceTempView" method, to check whether the issue is with the SQL implementation only?
06-01-2023 05:57 AM
Are all the cells after cell 40 skipped or is it just cell 41?
06-01-2023 12:47 PM
Hi Lakshay,
Thanks for your answer. All the following cells are skipped.
I ran more tests; here are the details of the steps (with generic names for the view/path/table).
I set up all these steps and do a Run all:
1/ I declare SQL parameters, for example "country" or "ab_test_id"
2/ I create a table my_database.my_table_${country}_ab_test_id_${ab_test_id}
This table is OK, I can display it when I run a simple:
SELECT * FROM my_database.my_table_${country}_ab_test_id_${ab_test_id}
3/ I have to create a temp view from this table
Just to be sure, I first run a:
DROP VIEW IF EXISTS my_temp_view
Then I create my temp view :
CREATE OR REPLACE TEMP VIEW my_temp_view AS
SELECT * FROM my_database.my_table_${country}_ab_test_id_${ab_test_id}
>> Status OK, no error message
4/ Then I want to display the temp view:
SELECT * FROM my_temp_view
>> Command skipped (as all the following cells)
If I manually relaunch this cell (SELECT * FROM my_temp_view), the content of the view is displayed, and if I manually execute all the following cells, they work.
-------------------------------------
I have tried with a scheduled job. I get the same behavior, but with an explicit error message on the job screen:
[TABLE_OR_VIEW_NOT_FOUND] The table or view my_database.my_table__ab_test_id_ cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
But I don't understand: in step 2/ I can correctly see the content of my table, so it is created and exists with the parameters in its name.
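For illustration, the name the job is looking for can be reproduced in Python. In the error above, the ${country} and ${ab_test_id} placeholders appear to have resolved to empty strings (the values below are hypothetical stand-ins for the declared SQL parameters):

```python
# Hypothetical sketch of the ${...} substitution used to build the table name.
def build_table_name(country: str, ab_test_id: str) -> str:
    return f"my_database.my_table_{country}_ab_test_id_{ab_test_id}"

# With the parameters set, the expected name:
ok = build_table_name("fr", "42")   # 'my_database.my_table_fr_ab_test_id_42'

# If the parameters resolve to empty strings, the name matches the one in
# the job error message above:
bad = build_table_name("", "")      # 'my_database.my_table__ab_test_id_'
```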
06-05-2023 03:32 PM
Could you also please try with a different cluster DBR version and check?
Please tag @Debayan in your next response, which will notify me. Thank you!
06-09-2023 04:07 AM
Hi @valskyyy valentin.lewandowski.partner,
Hope all is well!
Just wanted to check in on whether you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.
We'd love to hear from you.
Thanks!