by Gapy • New Contributor II
- 1013 Views
- 1 replies
- 1 kudos
Dear all, will (and when will) Auto Loader also support schema inference and evolution for Parquet files? At this point, if I am not mistaken, it is only supported for JSON and CSV. Thanks and regards, Gapy
Latest Reply
@Gasper Zerak​, this will be available in the near future (DBR 10.3 or later). Unfortunately, we don't have an SLA at this moment.
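For orientation, once Parquet support lands, the setup would presumably mirror today's JSON/CSV schema-inference flow. A hypothetical sketch (option names follow the current cloudFiles pattern and may differ for Parquet; paths are placeholders):

```python
# Hypothetical sketch only: assumes Parquet gets the same cloudFiles
# schema-inference options that JSON/CSV use today. Paths are placeholders.
df = (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "parquet")
        # schemaLocation is where Auto Loader tracks the inferred/evolved schema
        .option("cloudFiles.schemaLocation", "/mnt/checkpoints/_schema")
        .load("/mnt/raw/events"))
```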
- 2116 Views
- 10 replies
- 9 kudos
If there is a registered model linked with a notebook, the lineage breaks if you move the notebook to a different path or even pull/upload a new version of the notebook. This is not good because when someone is doing their development/testin...
Latest Reply
I also cannot reproduce this with these exact steps (I think). After moving the notebook and moving it back, the link to it (and the link to the revision) still works as expected. You are using the MLflow built into Databricks, right?
9 More Replies
by RantoB • Valued Contributor
- 5244 Views
- 3 replies
- 3 kudos
Hi, I was using the following command to import variables and functions from another notebook: %run ./utils. For some reason it is not working any more and gives me this message: Exception: File `'./utils.py'` not found. utils.py is still at the same pl...
Latest Reply
I finally solved my issue. In the same cell, I had written a comment starting with #, and it was not working because of that...
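For context: %run is a notebook magic that must be the only content of its cell; a leading comment turns the cell into an ordinary Python cell, so the magic is never executed. A sketch of the failing and working layouts:

```
# Cell 1 - does NOT work: the comment makes this a plain Python cell,
# so the %run line below is never interpreted as a magic command.
# import shared helpers
%run ./utils

# Cell 2 - works: %run is alone in its own cell.
%run ./utils
```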
2 More Replies
- 687 Views
- 2 replies
- 4 kudos
Enabling the Task Orchestration feature in Jobs via the API as well: Databricks supports orchestrating multiple tasks within a job. You must enable this feature in the admin console. Once enabled, it cannot be disabled. To enable orch...
Latest Reply
Thank you @Mohit Miglani​ for this amazing post.
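If enabling the setting over the REST API is possible, it would presumably go through the workspace-conf endpoint. A sketch only: the configuration key name below is an assumption, not confirmed by the post, and host/token are placeholders.

```python
# Hypothetical sketch: PATCH the workspace configuration over the REST API.
# The key name "enableTasksInJobs" is an ASSUMPTION, not confirmed by the post.
import requests

host = "https://<your-workspace>.cloud.databricks.com"  # placeholder
token = "<personal-access-token>"                        # placeholder

resp = requests.patch(
    f"{host}/api/2.0/workspace-conf",
    headers={"Authorization": f"Bearer {token}"},
    json={"enableTasksInJobs": "true"},
)
resp.raise_for_status()
```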
1 More Replies
- 1782 Views
- 5 replies
- 3 kudos
Can anyone tell me how I can access the customer_t1 dataset that is referenced in the book "Delta Lake - The Definitive Guide"? I am trying to follow along with one of the examples.
Latest Reply
Some files are shown here: https://github.com/vinijaiswal/delta_time_travel/blob/main/Delta%20Time%20Travel.ipynb, but it is quite strange that there is no source data in the repository. I think the only way is to write to Vini Jaiswal on GitHub.
4 More Replies
- 1879 Views
- 2 replies
- 2 kudos
The code below executes a 'get' API method to retrieve objects from S3 and write them to the data lake. The problem arises when I use dbutils.secrets.get to get the keys required to establish the connection to S3: my_dataframe.rdd.foreachPartition(partition ...
Latest Reply
Hi @Sandesh Puligundla​, you just need to move the following two lines:
val AccessKey = dbutils.secrets.get(scope = "ADB_Scope", key = "AccessKey-ID")
val SecretKey = dbutils.secrets.get(scope = "ADB_Scope", key = "AccessKey-Secret")
outside of the fo...
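The fix can be sketched in plain Python terms: fetch the secret once on the driver and let the partition function capture the resulting plain value, instead of calling the secrets API inside foreachPartition. Here `get_secret` is a hypothetical stand-in for dbutils.secrets.get, which only works on the driver.

```python
# Plain-Python sketch of the fix: fetch secrets ONCE on the driver and let the
# partition function close over the resulting plain strings. `get_secret` is a
# hypothetical stand-in for dbutils.secrets.get, which only runs on the driver.
def get_secret(scope, key):
    return f"{scope}/{key}"  # placeholder secret value

access_key = get_secret("ADB_Scope", "AccessKey-ID")      # driver-side call
secret_key = get_secret("ADB_Scope", "AccessKey-Secret")  # driver-side call

def handle_partition(rows):
    # Workers see only the captured strings; they never call get_secret.
    return [f"{access_key}:{row}" for row in rows]

result = handle_partition(["obj1", "obj2"])
```

The same closure-capture pattern applies unchanged to the Scala version in the reply: only plain serializable values should cross into the foreachPartition body.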
1 More Replies
- 381 Views
- 1 replies
- 2 kudos
Are the EBS volumes used by Databricks clusters encrypted, especially the root volumes?
Latest Reply
Yes, these EBS volumes are encrypted. Earlier, root volume encryption was not supported, but it has also been enabled recently (since April 2021). Please find more details on the docs page: https://docs.databricks.com/clusters/configure.html#e...
- 2764 Views
- 6 replies
- 5 kudos
Why does /dbfs seem to be empty in my Databricks cluster? If I run %sh ls /dbfs, I get no output. I am looking for the databricks-datasets subdirectory, but I can't find it under /dbfs.
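One thing worth checking (an assumption about the setup, since no reply is shown in this digest): %sh lists the driver's local filesystem through the /dbfs FUSE mount, which is not available on every cluster type; the dbutils API reads DBFS directly. A sketch:

```python
# Sketch: list DBFS through the dbutils API instead of the /dbfs FUSE mount,
# which is not mounted on some cluster types (e.g. Community Edition).
files = dbutils.fs.ls("/databricks-datasets")
display(files)
```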
- 1067 Views
- 3 replies
- 2 kudos
Objective: retrieve objects from an S3 bucket using a 'get' API call, write the retrieved objects to Azure Data Lake, and in case of errors like 404s (object not found) write the error message to Cosmos DB. "my_dataframe" consists of a column (s3Obje...
Latest Reply
Hi @Sandesh Puligundla​, the issue is that you are using the Spark context inside foreachPartition. You can create a DataFrame only on the Spark driver. A few Stack Overflow references: https://stackoverflow.com/questions/46964250/nullpointerexception-creatin...
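One way around this, sketched in plain Python (record shapes here are hypothetical): have the partition function return plain error records and build the DataFrame / do the Cosmos DB write on the driver, e.g. via rdd.mapPartitions(...).collect() instead of foreachPartition.

```python
# Sketch: instead of building a DataFrame inside foreachPartition (the Spark
# context only exists on the driver), each partition returns plain error
# records and the driver collects them. The record shape is hypothetical.
def collect_errors(rows):
    # Runs on the workers: no SparkSession/SparkContext usage here.
    return [{"key": r["key"], "error": "object not found"}
            for r in rows if r["status"] == 404]

# Stand-in for two RDD partitions, as rdd.mapPartitions(collect_errors)
# would see them.
partitions = [
    [{"key": "a.json", "status": 200}, {"key": "b.json", "status": 404}],
    [{"key": "c.json", "status": 404}],
]

# Driver side: what .collect() would return; from here a DataFrame can be
# created and written to Cosmos DB on the driver.
errors = [e for part in partitions for e in collect_errors(part)]
```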
2 More Replies
by SEOCO • New Contributor II
- 1691 Views
- 3 replies
- 3 kudos
Hi, this is all a bit new to me. Does anybody have any idea how to pass a parameter to a Databricks notebook? I have a DevOps pipeline/release that moves my Databricks notebooks towards the QA and Production environments. The only problem I am facing is th...
Latest Reply
@Mario Walle​ - If @Hubert Dudek​'s answer solved the issue, would you be happy to mark his answer as best so that it will be more visible to other members?
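A common pattern for this kind of setup (a sketch; the widget name is a placeholder) is to read environment-specific values through notebook widgets, which a job run or deployment pipeline can set as parameters:

```python
# Sketch: the notebook reads its parameters through widgets; a DevOps
# pipeline or job run can pass a different value per environment.
# The widget name "environment" is a placeholder.
dbutils.widgets.text("environment", "dev")
env = dbutils.widgets.get("environment")
print(f"Running against the {env} environment")
```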
2 More Replies
- 2160 Views
- 2 replies
- 4 kudos
I am new to Databricks Community Edition. I was following the quickstart guide and running through basic cluster management - create, start, etc. For whatever reason, I cannot restart an existing cluster. There is nothing in the cluster event logs or...
Latest Reply
Hi @Jeff Luecht, please refresh the event logs. You can clone your cluster. As a Community Edition user, your cluster will automatically terminate after an idle period of two hours. For more configuration options, please upgrade your Databricks subscri...
1 More Replies
by Erik • Valued Contributor II
- 1639 Views
- 6 replies
- 2 kudos
Situation: we have one partition per date, and it just so happens that each partition ends up (after optimize) as *a single* 128 MB file. We partition on date and Z-order on userid, and our query is something like "find max value of column A where useri...
Latest Reply
Z-Order will make sure that, in case you need to read multiple files, these files are co-located. For a single file this does not matter, as a single file is always local to itself. If you are certain that your Spark program will only read a single file,...
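For reference, the layout discussed above comes from running OPTIMIZE with a ZORDER clause after each date partition is loaded; a sketch (the table name is a placeholder, the partition and Z-order columns are from the question):

```python
# Sketch: co-locate rows by userid inside one date partition.
# Table name "events" is a placeholder; columns are from the post.
spark.sql("""
  OPTIMIZE events
  WHERE date = '2021-12-01'
  ZORDER BY (userid)
""")
```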
5 More Replies
- 1580 Views
- 5 replies
- 0 kudos
I am searching for the Databricks JDBC 2.6.19 documentation page. I can find release notes from the Databricks download page (https://databricks-bi-artifacts.s3.us-east-2.amazonaws.com/simbaspark-drivers/jdbc/2.6.19/docs/release-notes.txt) but on Mag...
Latest Reply
By the way, what is still odd is that the Simba docs say 2.6.16 only supports up to Spark 2.4, while the release notes on the Databricks download page say 2.6.16 already supports Spark 3.0. Strange that we get contradicting info from the actual driv...
4 More Replies
by Daniel • New Contributor III
- 3793 Views
- 11 replies
- 6 kudos
Hello guys, can someone help me? Autocomplete of parentheses, quotation marks, brackets, and square brackets stopped working in Python notebooks. How can I fix this? Daniel
Latest Reply
@Piper Wilson​, @Werner Stinckens​, thank you so much for your help. I followed @Jose Gonzalez​'s suggestion and now it works.
10 More Replies