Data Engineering

Forum Posts

Ajay-Pandey
by Esteemed Contributor III
  • 701 Views
  • 3 replies
  • 7 kudos

Rename and drop columns with Delta Lake column mapping

Hi all, Databricks now supports column rename and drop. Column mapping requires the following Delta protocols: Reader version 2 or above; Writer version 5 or above. Blog URL##Available in D...

Latest Reply
Poovarasan
New Contributor II

The above-mentioned feature is not working in a DLT pipeline if the script has more than 4 columns.

2 More Replies
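For anyone landing on this thread, a minimal sketch of the workflow described above, with hypothetical table and column names, assuming a Delta table on a runtime that supports the reader 2 / writer 5 protocol:

    # Enable column mapping (this upgrades the table protocol), then
    # rename and drop columns. Names below are placeholders.
    spark.sql("""
        ALTER TABLE demo.events SET TBLPROPERTIES (
            'delta.minReaderVersion' = '2',
            'delta.minWriterVersion' = '5',
            'delta.columnMapping.mode' = 'name'
        )
    """)
    spark.sql("ALTER TABLE demo.events RENAME COLUMN old_name TO new_name")
    spark.sql("ALTER TABLE demo.events DROP COLUMN obsolete_col")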
ranged_coop
by Valued Contributor II
  • 8172 Views
  • 24 replies
  • 29 kudos

How to install Chromium Browser and Chrome Driver on DBX runtime 10.4 and above ?

Hi Team, We are wondering if there is a recommended way to install the Chromium browser and ChromeDriver on Databricks Runtime 10.4 and above? I have been through the site and have come across several links to this effect, but they all seem to be ins...

Latest Reply
Kaizen
Contributor III

Look into Playwright instead of Selenium. I went through the same process y'all went through here (I ended up writing an init script to install the drivers etc.). This is all done for you in Playwright. Refer to this post - I hope it helps! https://communit...

23 More Replies
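Building on the Playwright suggestion in this reply, a minimal sketch under these assumptions: the playwright package and its browser are installed first (e.g. %pip install playwright, then %sh playwright install chromium, plus system dependencies if the runtime image lacks them), and the sync API runs outside an active asyncio event loop (otherwise Playwright's async API is the usual fallback):

    from playwright.sync_api import sync_playwright

    # Playwright manages its own Chromium build, so no separate
    # chromedriver install or init script is needed.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")
        print(page.title())
        browser.close()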
qwerty1
by Contributor
  • 2333 Views
  • 4 replies
  • 13 kudos

Resolved! When will databricks runtime be released for Scala 2.13?

I see that Spark fully supports Scala 2.13. I wonder why there is no Databricks runtime with Scala 2.13 yet. Any plans on making this available? It would be super useful.

Latest Reply
source2sea
Contributor

I see DB runtime 14 is out, but it is still on Scala 2.12. When does Databricks plan to support 2.13 or 3? Thank you.

3 More Replies
Gary_Irick
by New Contributor III
  • 4228 Views
  • 9 replies
  • 12 kudos

Delta table partition directories when column mapping is enabled

I recently created a table on a cluster in Azure running Databricks Runtime 11.1. The table is partitioned by a "date" column. I enabled column mapping, like this: ALTER TABLE {schema}.{table_name} SET TBLPROPERTIES('delta.columnMapping.mode' = 'nam...

Latest Reply
Kaniz
Community Manager

Hi @Gary_Irick, @gongasxavi, @Pete_Cotton, @aleks1601. Certainly, let's address your questions regarding Delta table partition directories and column mapping. Directory names with column mapping: when you enable column mapping in a Delta tabl...

8 More Replies
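Since the reply above is truncated, one way to observe the behavior being discussed, sketched with a hypothetical table name: once column mapping is enabled, physical partition directory names no longer match the logical column names, so inspect the table's storage location directly instead of constructing <column>=<value> paths by hand.

    # Look up the Delta table's storage location, then list the
    # partition directories actually written on disk.
    detail = spark.sql("DESCRIBE DETAIL demo.partitioned_table").collect()[0]
    print(detail.location)
    for info in dbutils.fs.ls(detail.location):
        print(info.path)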
azera
by New Contributor II
  • 779 Views
  • 2 replies
  • 2 kudos

Stream-stream window join after time window aggregation not working in 13.1

Hey, I'm trying to perform time window aggregation in two different streams followed by a stream-stream window join, as described here. I'm running Databricks Runtime 13.1, exactly as advised. However, when I reproduce the following code: clicksWindow = c...

Latest Reply
Happyfield7
New Contributor II

Hey, I'm currently facing the same problem, so I would like to know if you've made any progress in resolving this issue.

1 More Replies
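For context, the pattern the post refers to aggregates each stream over a time window and then joins the two aggregated streams on the window column. A hedged sketch using rate sources as stand-ins for the real click and impression streams (watermarks are required for this kind of chained aggregation-then-join):

    from pyspark.sql.functions import window

    clicks = (spark.readStream.format("rate").load()
              .withWatermark("timestamp", "10 minutes"))
    impressions = (spark.readStream.format("rate").load()
                   .withWatermark("timestamp", "10 minutes"))

    # Aggregate each stream into 1-hour windows, then join the two
    # aggregated streams on the window struct itself.
    clicks_window = clicks.groupBy(window("timestamp", "1 hour")).count()
    impressions_window = impressions.groupBy(window("timestamp", "1 hour")).count()
    joined = clicks_window.join(impressions_window, "window", "inner")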
sevvalmehder
by New Contributor II
  • 1152 Views
  • 3 replies
  • 3 kudos

Databricks run-time 12.2 LTS drop function problem

I am getting an error about the `drop` function of PySpark on a cluster using 12.2 LTS. When I check the error, I see Spark solved that bug; see SPARK-42444. Also, when I check the maintenance updates page, I saw this solved issue included in the Databricks R...

Latest Reply
Anonymous
Not applicable

Hi @Sevval Mehder! Elevate our community by acknowledging exceptional contributions. Your participation in marking the best answers is a testament to our collective pursuit of knowledge.

2 More Replies
darkraisisi
by New Contributor
  • 446 Views
  • 0 replies
  • 0 kudos

Is there a way to manually update the CUDA required file in the DB runtime?

Is there a way to manually update the CUDA required file in the DB runtime? There are some rather annoying bugs still in TF 2.11 that have been fixed in TF 2.12. Sadly, the latest DB runtime 13.1 (beta) only supports the older TF 2.11 even though 2.12 was ...

grazie
by Contributor
  • 1104 Views
  • 2 replies
  • 2 kudos

How to get dbutils in Runtime 13

We're using the following method (generated by using dbx) to access dbutils, e.g. to retrieve parameters from secret scopes:

    @staticmethod
    def _get_dbutils(spark: SparkSession) -> "dbutils":
        try:
            from pyspark.dbutils import...

Latest Reply
colt
New Contributor III

We have something similar in our code. This worked using Runtime 13 until last week. The Machine Learning DBR doesn't work either.

1 More Replies
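A self-contained version of the accessor pattern the post describes, hedged: the fallback branch assumes the dbutils global that Databricks injects into notebooks, which will not exist when the code runs outside Databricks.

    from pyspark.sql import SparkSession

    def get_dbutils(spark: SparkSession):
        # On a Databricks cluster, DBUtils can be built from the session.
        try:
            from pyspark.dbutils import DBUtils
            return DBUtils(spark)
        except ImportError:
            # Fall back to the notebook-injected global, if present;
            # returns None when running outside Databricks.
            return globals().get("dbutils")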
Thanapat_S
by Contributor
  • 1259 Views
  • 2 replies
  • 0 kudos

Resolved! Table access control is deprecated in Databricks Runtime for Machine Learning

After reviewing the deprecations page, I discovered that Table Access Control is not supported in the Databricks Runtime for Machine Learning. I want to understand why table access control is not designed for the ML runtime. Is there any reason behind this?

Latest Reply
Anonymous
Not applicable

@Thanapat Sontayasara, Table Access Control (TAC) is a feature in Databricks that allows you to restrict access to specific tables in your workspace based on user or group identity. According to the Databricks documentation, TAC is not supported in th...

1 More Replies
AyushModi038
by New Contributor III
  • 4240 Views
  • 2 replies
  • 1 kudos

Resolved! Upgrade Python version in cluster

Currently I am using the following cluster. It is using the default Python version of 3.9.5, and I would like to update it to 3.10.1.0. How can I achieve this?

Latest Reply
Anonymous
Not applicable

Hi @Ayush Modi, Thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers yo...

1 More Replies
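Worth noting for anyone with the same question: the Python version is baked into each Databricks Runtime image, so the usual route is choosing a newer runtime version rather than patching the interpreter in place. A quick check of what the current cluster ships:

    import sys

    # The interpreter version is fixed by the selected runtime;
    # upgrading Python generally means upgrading the runtime.
    print(sys.version)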
Data_Engineer3
by Contributor II
  • 3425 Views
  • 4 replies
  • 5 kudos

How can I use the same Spark session from one notebook in another notebook in Databricks

I want to use the same Spark session that was created in one notebook in another notebook within the same environment. For example, if an object (variable) got initialized in the first notebook, I need to use the same object in t...

Latest Reply
Manoj12421
Valued Contributor II

You can use %run and then use the location of the notebook - %run "/folder/notebookname"

3 More Replies
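Expanding slightly on this answer: %run executes the target notebook inline, so its top-level variables (including DataFrames bound to the shared SparkSession) become visible in the calling notebook. A sketch with hypothetical paths and names, written as comments because %run is a cell magic:

    # In a notebook at /folder/setup (hypothetical):
    #     shared_df = spark.range(10)
    #
    # In the consuming notebook, run the setup notebook in its own cell:
    #     %run /folder/setup
    #
    # After that cell, the variable defined in setup is usable here:
    #     display(shared_df)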
Rahul2025
by New Contributor III
  • 2005 Views
  • 4 replies
  • 4 kudos

Make environment variables defined in init script available to Spark JVM job?

Hi, We're using Databricks Runtime version 11.3 LTS and executing a Spark Java job using a job cluster. To automate the execution of this job, we need to define (source in from bash config files) some environment variables through an init script (clust...

Latest Reply
Anonymous
Not applicable

Hi @Rahul K, Hope all is well! Just wanted to check in if you were able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!

3 More Replies
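Since the thread was left unresolved, a hedged note rather than a confirmed fix: environment variables entered in the cluster configuration (Advanced Options > Spark > Environment variables) are exported to the driver process, where a JVM job can read them with System.getenv; whether variables sourced inside an init script propagate the same way is exactly the open question above. The variable name below is hypothetical:

    import os

    # Returns the value if the cluster config or init script actually
    # exported it into the driver environment, otherwise None.
    print(os.environ.get("MY_JOB_CONFIG"))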
thushar
by Contributor
  • 1656 Views
  • 5 replies
  • 0 kudos

Optimize & Compaction

Hi, from which Databricks runtime version are OPTIMIZE and compaction supported?

Latest Reply
Joe_Suarez
New Contributor III

OPTIMIZE and compaction are operations commonly used with Delta Lake on Apache Spark to improve the performance of data storage and processing. Databricks, a cloud-based platform for Apache Spark, provides support for these operations on v...

4 More Replies
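For reference, the operation itself as run from a notebook; the table and column names are placeholders:

    # Compact small files in the Delta table, optionally co-locating
    # data on a column that read queries commonly filter on.
    spark.sql("OPTIMIZE demo.events ZORDER BY (event_date)")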
Anjum
by New Contributor II
  • 2619 Views
  • 6 replies
  • 1 kudos

PGP encryption and decryption using gnupg

Hi, We are using the python-gnupg==0.4.8 package for encryption and decryption, and this was working as expected on Databricks Runtime 9.1 LTS, but when we upgraded our runtime to 12.1 it stopped working with the error "gnupghome should be a d...

Latest Reply
Anonymous
Not applicable

Hi @Anjum Aara, Hope everything is going great. Just wanted to check in if you were able to resolve your issue. If yes, would you be happy to mark an answer as best so that other members can find the solution more quickly? If not, please tell us so we...

5 More Replies
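The truncated error above suggests the path passed as gnupghome must be an existing directory on the newer runtime. A hedged sketch of working around that by creating the directory before constructing the GPG handle (paths are hypothetical):

    import os
    import gnupg

    # Create the GnuPG home directory up front so the constructor's
    # directory check passes, then use the handle for encrypt/decrypt.
    home = "/tmp/gnupg_home"
    os.makedirs(home, exist_ok=True)
    gpg = gnupg.GPG(gnupghome=home)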
mortenhaga
by Contributor
  • 4842 Views
  • 8 replies
  • 10 kudos

Resolved! New strange error on Runtime 12 and above: java.lang.AssertionError: assertion failed

Hi all, I struggle to find out why this error message suddenly pops up after running a cell in a notebook. The notebook is trying to run a simple "INSERT INTO" command in SQL. When I only do a SELECT clause, the cell runs without error. Also, I only ge...

Latest Reply
entongshen__Dat
New Contributor III

Thanks for reporting! We have identified a defect with an early version of DBR 12 related to INSERT INTO .. SELECT when certain query patterns are involved. The defect has since been fixed. Please let us know if you have any additional questions.

7 More Replies