- 406 Views
- 0 replies
- 1 kudos
I have a large Delta table that I need to analyze in native R. The only option I currently have is to query the Delta table and then use collect() to bring that Spark dataframe into an R dataframe. Is there an alternative method that would allow me to qu...
- 1333 Views
- 4 replies
- 4 kudos
Recently I ran into a number of issues running our notebooks in Interactive Mode. For example, we can't create a (Delta) table. The command would run and then idle with no apparent exception. The path is created on AWS S3 but the Delta log is never create...
Latest Reply
The admin can disable the option to use No Isolation Shared clusters. I recommend you switch to Single User mode, where UC is enabled. Don't worry, you won't need to change your code. If you encounter this kind of issue, make sure to open a tick...
3 More Replies
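As a hedged sketch of the suggested fix: in the cluster definition (JSON view or Clusters API), switching to Single User access mode comes down to the `data_security_mode` and `single_user_name` fields. The cluster name, runtime, node type, and user below are placeholders, not values from the thread.

```json
{
  "cluster_name": "uc-single-user",
  "spark_version": "13.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "num_workers": 2,
  "data_security_mode": "SINGLE_USER",
  "single_user_name": "user@example.com"
}
```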
by Hunter • New Contributor III
- 6873 Views
- 9 replies
- 8 kudos
I am creating plots in Databricks using Python and matplotlib. These look great in the notebook, and I can save them to the DBFS using plt.savefig("/dbfs/FileStore/tables/[plot_name].png"). I can then download the png files to my computer individually by pas...
Latest Reply
Thanks everyone! I am already at a place where I can download a png to FileStore and use a URL to download that file locally. What I was wondering was if there is some Databricks function I can use to launch the URL that references the png file and d...
8 More Replies
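One common workaround, rather than downloading each png individually, is to bundle all the saved figures into a single zip archive and download that one file. A minimal stdlib sketch, assuming the plots were already written somewhere like /dbfs/FileStore/tables (the demo below uses placeholder files in a temp directory instead of real matplotlib output):

```python
import os
import shutil
import tempfile
import zipfile

def bundle_pngs(src_dir: str, archive_base: str) -> str:
    """Copy every .png in src_dir into a staging dir and zip it.

    Returns the path of the created archive (make_archive appends ".zip").
    """
    staging = tempfile.mkdtemp()
    for name in os.listdir(src_dir):
        if name.endswith(".png"):
            shutil.copy(os.path.join(src_dir, name), staging)
    return shutil.make_archive(archive_base, "zip", staging)

# Demo: placeholder files standing in for figures saved with plt.savefig
plots = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(plots, f"plot_{i}.png"), "wb") as f:
        f.write(b"\x89PNG placeholder")

archive = bundle_pngs(plots, os.path.join(tempfile.mkdtemp(), "plots"))
print(sorted(zipfile.ZipFile(archive).namelist()))
```

On Databricks the archive could be written under /dbfs/FileStore and then fetched through the usual /files/ URL, so only one download is needed.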
- 1029 Views
- 4 replies
- 3 kudos
As a DevOps engineer, I want to enforce cluster policies at deployment time when the job is deployed/created, well before it is time to actually use it (i.e. before its scheduled/triggered run time without actually running it).
Latest Reply
Hi @Nathan Hawk, We haven't heard from you since the last response from @nafri A, and I was checking back to see if his suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpful to others. Als...
3 More Replies
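One way to approximate this at deployment time is to validate the job's cluster spec against the policy rules before the job is ever created, e.g. in a CI step. The sketch below is hypothetical: it handles only the "fixed"-value rule type from Databricks cluster-policy definitions, and the policy and spec shown are made-up examples, not the Jobs API.

```python
def violations(cluster_spec: dict, policy: dict) -> list:
    """Return a list of policy violations for a cluster spec.

    `policy` maps an attribute path to {"type": "fixed", "value": ...},
    a simplified form of Databricks cluster-policy definitions.
    """
    problems = []
    for path, rule in policy.items():
        if rule.get("type") != "fixed":
            continue  # only fixed-value rules in this sketch
        actual = cluster_spec
        for key in path.split("."):
            actual = actual.get(key) if isinstance(actual, dict) else None
        if actual != rule["value"]:
            problems.append(f"{path}: expected {rule['value']!r}, got {actual!r}")
    return problems

policy = {
    "spark_version": {"type": "fixed", "value": "13.3.x-scala2.12"},
    "autotermination_minutes": {"type": "fixed", "value": 30},
}
spec = {"spark_version": "13.3.x-scala2.12", "autotermination_minutes": 120}
print(violations(spec, policy))
```

Running the check in CI fails the deployment early, well before the job's scheduled run time, which is the behavior asked for above.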
- 914 Views
- 3 replies
- 20 kudos
Hi team, I wonder if we can create a Databricks workspace that is not related to an Azure email address. Thanks
- 920 Views
- 4 replies
- 14 kudos
I took the certified DE exam (version 2). Do I receive a new badge or certificate when I pass the newest version of the DE exam? I'm going to take that and review my knowledge.
Latest Reply
Hi @Gam Nguyen, Thank you for reaching out! Let us look into this for you, and we'll circle back with an update.
3 More Replies
- 444 Views
- 0 replies
- 1 kudos
I have a multi-task job that runs every day, where the first notebook in the job checks whether the run should continue based on the date the job is run. The majority of the time the answer is no, and I'm raising an exception for the job to ...
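An alternative to raising an exception (which marks the run as failed) is to compute a plain boolean in the gating notebook and let downstream tasks branch on it. A minimal sketch, with a made-up "first of the month" rule standing in for whatever date logic the job actually uses:

```python
from datetime import date

def should_run(run_date: date, day_of_month: int = 1) -> bool:
    """Continue the job only on the chosen day of the month."""
    return run_date.day == day_of_month

# In a Databricks notebook one might end with (hypothetical usage):
#   dbutils.notebook.exit("run" if should_run(date.today()) else "skip")
# and have downstream tasks branch on that value instead of failing the run.
print(should_run(date(2023, 6, 1)), should_run(date(2023, 6, 15)))
```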
- 1039 Views
- 2 replies
- 1 kudos
The goal is to revoke SELECT permissions from a user for a table in Data Explorer in the SQL workspace. I've tried navigating to the Permissions tab of the table in Data Explorer. Initially the Revoke button is greyed out and only the Grant button is ...
Latest Reply
Hi @Christian Seberino, We haven't heard from you since the last response from @Ajay Pandey, and I was checking back to see if his suggestions helped you. Otherwise, if you have any solution, please share it with the community, as it can be helpful t...
1 More Replies
by Harun • Honored Contributor
- 594 Views
- 1 replies
- 1 kudos
Hi community members and Databricks officials, Nowadays I am seeing a lot of spam posts in our groups and discussions. Forum admins and Databricks officials, please take action on the users who are spamming the timeline with promotional content. As...
Latest Reply
Yes @Databricks Forum Admin, please take action on this.
- 1493 Views
- 2 replies
- 3 kudos
I have a lot of tables with 80% of the columns filled with nulls. I understand SQL Server provides a way to handle this kind of data in the data definition of the tables (with the SPARSE keyword). Does the data lake provide something similar?
Latest Reply
The data lake itself does not, but the file format you use to store the data does. For example, Parquet uses column compression, so sparse data will compress pretty well. CSV, on the other hand: total disaster.
1 More Replies
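The intuition behind that reply can be shown with a toy illustration: a mostly-null column is highly repetitive, so a general-purpose compressor crushes it, which is roughly what Parquet's per-column encoding and compression exploit. This stdlib sketch uses JSON plus zlib as a stand-in, not the Parquet format itself:

```python
import json
import zlib

# A column that is ~80% nulls, similar to a sparse table column
column = ([None] * 8 + [42, 7]) * 1000

raw = json.dumps(column).encode()         # uncompressed serialization
compressed = zlib.compress(raw)           # repetitive nulls compress very well

print(len(raw), len(compressed))
```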
- 4424 Views
- 5 replies
- 3 kudos
We have a scenario where ideally we'd like to use Managed Identities to access storage but also secrets. For now we have a setup with service principals accessing secrets through secret scopes, but we foresee a situation where we may get many service...
Latest Reply
grive • New Contributor III
I have unofficial word that this is not supported, and docs don't mention it. I have the feeling that even if I got it to work it should not be trusted for now.
4 More Replies
- 3033 Views
- 4 replies
- 4 kudos
...or does Databricks only charge you when you are actually running the cluster, no matter how long you keep the cluster idle? Thanks!
Latest Reply
If you do not configure your cluster to auto-terminate after a period of idle time, then yes, you will be charged for that.
3 More Replies
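Concretely, auto-termination is the `autotermination_minutes` field in the cluster definition (JSON view or Clusters API); setting it to 0 disables auto-termination, which is what leads to charges for idle clusters. The other values below are placeholders:

```json
{
  "cluster_name": "analytics-cluster",
  "spark_version": "13.3.x-scala2.12",
  "node_type_id": "i3.xlarge",
  "num_workers": 2,
  "autotermination_minutes": 30
}
```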
- 1514 Views
- 4 replies
- 3 kudos
We need to be able to import a custom certificate (https://learn.microsoft.com/en-us/azure/databricks/kb/python/import-custom-ca-cert) in the same way as in the "data engineering" module but in the Databricks SQL module
Latest Reply
You can try downloading it to DBFS and maybe accessing it from there, if your use case really needs that.
3 More Replies
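For notebook contexts, one way to use a bundle stored on DBFS is to point `requests` (and libraries built on it) at it via the `REQUESTS_CA_BUNDLE` environment variable, as the linked KB article does; whether an equivalent exists for the Databricks SQL module is exactly what this thread is asking. The path below is hypothetical:

```python
import os

# Hypothetical path; upload your PEM bundle to DBFS first,
# e.g. with dbutils.fs.put or the Databricks CLI.
CA_BUNDLE = "/dbfs/FileStore/certs/my_ca_bundle.pem"

# `requests` honors this variable, so HTTPS calls made through it
# will trust the custom CA without code changes.
os.environ["REQUESTS_CA_BUNDLE"] = CA_BUNDLE
print(os.environ["REQUESTS_CA_BUNDLE"])
```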
- 4886 Views
- 16 replies
- 5 kudos
Using DBR 10.0. When calling toPandas() the worker fails with IndexOutOfBoundsException. It seems like ArrowWriter.sizeInBytes (which looks like a proprietary method, since I can't find it in OSS) calls Arrow's getBufferSizeFor, which fails with this err...
Latest Reply
I am also facing the same issue. I have applied the config `spark.sql.execution.arrow.pyspark.enabled` set to `false`, but I am still facing the same issue. Any idea what's going on? Please help me out. org.apache.spark.SparkException: Job aborted ...
15 More Replies
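For reference, the config mentioned in the reply can be set in the cluster's Spark config. The related OSS setting `spark.sql.execution.arrow.pyspark.fallback.enabled` lets toPandas() fall back to the non-Arrow path when Arrow conversion fails; note the reply above reports that disabling Arrow alone did not resolve the error, so treat this as a diagnostic step rather than a confirmed fix.

```
spark.sql.execution.arrow.pyspark.enabled false
spark.sql.execution.arrow.pyspark.fallback.enabled true
```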
by Soma • Valued Contributor
- 1585 Views
- 5 replies
- 1 kudos
Hi all, I am looking for some options to add the client-side encryption feature of Azure to store data in ADLS Gen2: https://learn.microsoft.com/en-us/azure/storage/blobs/client-side-encryption?tabs=java. Any help will be highly appreciated. Note: Fernet si...
Latest Reply
@Vidula Khanna We are going with Fernet encryption, as a direct method is not available.
4 More Replies
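Since the thread settles on Fernet, here is a minimal round-trip sketch using the `cryptography` package (assumed to be installed; it ships with many Databricks runtimes). In practice the key would be stored in a secret scope rather than generated inline:

```python
from cryptography.fernet import Fernet

# Generate a key; in production, store and fetch it from a secret scope
# (e.g. dbutils.secrets.get) instead of generating it per run.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"sensitive value")   # encrypt before writing to ADLS
plaintext = f.decrypt(token)            # decrypt after reading back
print(plaintext)
```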