- 1362 Views
- 1 replies
- 0 kudos
Can Azure Databricks be connected through Microstrategy?
Latest Reply
Found this: "Azure Databricks to Microstrategy JDBC/ODBC Setup Tips"

Purpose: This is a quick reference for common Microstrategy configuration tips, tricks, and common pitfalls when setting up a connection to Databricks.

Networking: For Azure, we recommend...
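The tips excerpt is truncated, but as a hedged sketch of the ODBC side of such a setup (not taken from the article), an odbc.ini entry for the Simba Spark ODBC driver commonly used with Azure Databricks might look like the following. Every host, HTTP path, and token value is a placeholder:

```ini
; Hypothetical odbc.ini entry for an Azure Databricks connection
; (Simba Spark ODBC driver; all values below are placeholders)
[Databricks]
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
Host=adb-1234567890123456.7.azuredatabricks.net
Port=443
HTTPPath=/sql/1.0/warehouses/abcdef1234567890
SSL=1
ThriftTransport=2
AuthMech=3
UID=token
PWD=<personal-access-token>
```

AuthMech=3 with UID=token and a personal access token as the password is the usual token-based authentication pattern for this driver; ThriftTransport=2 selects HTTP transport.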
- 1134 Views
- 1 replies
- 1 kudos
We are working in IDEs, and once the code is developed we put the .py file in DBFS. I am using that DBFS path (dbfs:/artifacts/kg/bootstrap.py) to create a job, but I get a "notebook not found" error. What could be the is...
Latest Reply
The notebooks you create are stored in the control plane, not in the data plane. You can import notebooks through the import option in the Databricks UI or via the API. A .py file placed in DBFS cannot be used as a notebook task when creating a job.
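The reply mentions importing notebooks via the API. As a sketch, the Workspace Import endpoint (POST /api/2.0/workspace/import) accepts base64-encoded source; the code below only builds the request body, and the workspace path and file content are hypothetical placeholders:

```python
import base64
import json

# Sketch: request body for Databricks' Workspace Import API
# (POST /api/2.0/workspace/import). The target path and source
# content below are hypothetical examples.
source = b"print('bootstrap logic')\n"

payload = {
    "path": "/Users/someone@example.com/bootstrap",  # hypothetical target
    "format": "SOURCE",
    "language": "PYTHON",
    "overwrite": True,
    "content": base64.b64encode(source).decode("ascii"),
}

print(json.dumps(payload, indent=2))
```

Once imported this way, the notebook lives in the workspace (control plane) and can be referenced by a job's notebook task.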
- 707 Views
- 1 replies
- 1 kudos
I tried printSchema() on a DataFrame in Databricks. The DataFrame has more than 1500 columns, and apparently printSchema() truncates the output, displaying only 1000 items. How can I see all columns?
Latest Reply
Databricks also shows the schema of the DataFrame when it's created: click the icon next to the name of the variable that holds the DataFrame. If the output exceeds the display limit, I would suggest writing the schema to a file.
- 294 Views
- 0 replies
- 0 kudos
VM bootstrap and authentication: When a VM boots up, it automatically authenticates with the Databricks control plane using Managed Identity (MI), a per-VM credential signed by Azure AD. Once authenticated, the VM fetches secrets from the control plane, in...
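For context on how a Managed Identity token is obtained on an Azure VM, the request goes to the Azure Instance Metadata Service (IMDS). The sketch below only builds that request (it would only succeed from inside an Azure VM), and the resource URI is an illustrative example, not something from the post:

```python
from urllib.parse import urlencode

# Sketch: the IMDS request an Azure VM uses to obtain a Managed
# Identity token. This only constructs the request; IMDS is reachable
# only from within an Azure VM. The resource URI is an example.
IMDS_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"
params = {
    "api-version": "2018-02-01",
    "resource": "https://management.azure.com/",  # example token audience
}
headers = {"Metadata": "true"}  # IMDS rejects requests without this header

url = f"{IMDS_ENDPOINT}?{urlencode(params)}"
print(url)
```

The returned token can then be presented to a service (here, the Databricks control plane) as proof of the VM's identity.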
- 647 Views
- 1 replies
- 0 kudos
For the OPTIMIZE command, I can give predicates, and it's easy to optimize the partitions where the data was added. Similarly, can I specify a "WHERE" clause on the partition for a VACUUM command?
Latest Reply
It's by design: the VACUUM command does not support filters on partition columns. Removing old files only partially could break the time travel feature.
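To illustrate the contrast, a sketch in Delta Lake SQL (table and column names are hypothetical examples): OPTIMIZE accepts a partition predicate, while VACUUM only takes a retention window:

```sql
-- OPTIMIZE accepts a partition predicate (names are illustrative):
OPTIMIZE events WHERE date >= '2023-01-01';

-- VACUUM has no WHERE clause; it only takes a retention period:
VACUUM events RETAIN 168 HOURS;
```

The retention window applies table-wide, which is what preserves a consistent history for time travel.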
- 413 Views
- 0 replies
- 0 kudos
Best practices: Hyperparameter tuning with Hyperopt

Bayesian approaches can be much more efficient than grid search and random search. Hence, with the Hyperopt Tree of Parzen Estimators (TPE) algorithm, you can explore more hyperparameters and larger ...
- 1233 Views
- 1 replies
- 0 kudos
Whenever I restart a Databricks cluster, new instances are not launched. This is because Databricks re-uses the instances. However, sometimes it's necessary to launch new instances; some scenarios are to mitigate a bad VM issue or maybe to get a patch fr...
Latest Reply
Currently, there is no direct option to restart the cluster with new instances. An easy hack to ensure new instances are launched is to add Cluster tags on your cluster. This will ensure that new instances have to be acquired as it's not possible to ...
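The tag-based workaround can be sketched as a Clusters API edit request. Note that the edit endpoint expects the full cluster spec to be re-sent; every value below (cluster id, versions, node type, tag) is a hypothetical placeholder:

```python
import json

# Sketch: body for the Databricks Clusters API "edit" call. Adding or
# changing a custom tag means the existing instances no longer match,
# so a restart must acquire fresh ones. All values are placeholders,
# and a real edit must include the cluster's full existing spec.
payload = {
    "cluster_id": "1234-567890-abcde123",
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
    "custom_tags": {"force_new_instances": "rotation-2024-01-01"},
}

print(json.dumps(payload, indent=2))
```

Bumping the tag value (e.g. a date) each time gives a repeatable way to force fresh instances on the next restart.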
- 714 Views
- 1 replies
- 0 kudos
When I start an experiment with mlflow.start_run(), the run gets tagged as finished instead of unfinished, even if my script is interrupted or fails before executing mlflow.end_run(). Can anyone explain why this happens?
Latest Reply
In a notebook, MLflow tags the run as each command executes, and once a command fails or exits, it logs and finishes the run at that point, even if the notebook fails. However, if you want to continue logging metrics or artifacts to that run, you just need to use...
- 580 Views
- 1 replies
- 0 kudos
I have provided the checkpointLocation as below; however, I see the config is ignored for my streaming query:

option("checkpointLocation", "path/to/checkpoint/dir")
Latest Reply
This is a common question from many users. If the streaming checkpoint location is not specified in the right place, this behavior is expected: the option must be set on the write side of the query. Below is an example of specifying the checkpoint correctly (the output path is illustrative):

df.writeStream \
    .format("parquet") \
    .option("checkpointLocation", "path/to/checkpoint/dir") \
    .start("path/to/output/dir")