Is there an upper bound on the number that I can assign to delta.dataSkippingNumIndexedCols for computing statistics? Is there a tradeoff benchmark available for increasing this number beyond 32?
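For context, a minimal sketch of setting this property on a Delta table (the table name `events` is hypothetical). Delta collects file-level statistics on the first N columns in schema order, so columns you filter on most should appear early in the schema; raising N increases write-time stats collection and metadata size, which is the main tradeoff:

```sql
-- Hypothetical table name. Stats are collected on the first 64 columns
-- (default is 32); applies to files written after the change.
ALTER TABLE events
  SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '64');
```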
@daniel_sahal I am facing the same error, but I have a multi-tenant application, i.e. if I set the cluster-level config while multiple clients are operating on that cluster, I can run into a race condition. Is there a way to avoid putting it in the cluster config...
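One way to sidestep the race (assuming the setting in question is a SQL conf, which the truncated post doesn't confirm) is to set it per session rather than on the cluster, since session-scoped settings apply only to the issuing client. The conf key below is just a placeholder for illustration:

```sql
-- Session-scoped: SET affects only the current Spark session, so
-- concurrent tenants on the same cluster don't overwrite each other.
SET spark.sql.shuffle.partitions = 64;
```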
Can you share what the *newtitanic* is? I think you would have done something similar: spark.sql("create table newtitanic as select * from titanic"). Something like this works for me, but the issue is I first make a temp view and then again create a tab...
Hi @Aviral Bhardwaj​, thank you for the answer. My question is more about using the ANALYZE TABLE command followed by DESCRIBE EXTENDED on the temp view that is created. You are using the right dataset, as shared in the screenshot. I have shared all the sequence ...
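For reference, the sequence being described is roughly the following (a sketch using the thread's `titanic` dataset; the view name is made up). Note that statistics computed via ANALYZE TABLE may not behave the same on a temp view as on a persisted table, which appears to be the crux of the question:

```sql
-- Create a temp view over the source table, then analyze and inspect it.
CREATE OR REPLACE TEMP VIEW titanic_view AS SELECT * FROM titanic;
ANALYZE TABLE titanic_view COMPUTE STATISTICS;
DESCRIBE EXTENDED titanic_view;
```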