Community Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

Mahajan
by New Contributor II
  • 4205 Views
  • 6 replies
  • 2 kudos

Want to disable cell scrollers.

There are two scrollbars visible in my notebook: one for the cell and another for the notebook. How can I disable the cell scrollbar? I have a hard time navigating to my code because I have to scroll the cell every time.

Community Discussions
Notebook
scrollers
Latest Reply
UmaMahesh1
Honored Contributor III
  • 2 kudos

Hi @Mahajan What exactly do you mean by disabling the cell scroll? If there were such an option, it would mean you can't scroll the cell at all and the cell view is fixed. This makes the cell redundant as at any given point of ti...

5 More Replies
DineshKumar
by New Contributor III
  • 2471 Views
  • 6 replies
  • 0 kudos

Databricks Cluster is going down after installing the external library

I have created a Databricks cluster with the below configuration. Databricks Runtime Version: 13.2 ML (includes Apache Spark 3.4.0, Scala 2.12). Node type: i3.xlarge (30.5 GB Memory, 4 Cores). I created a notebook and am trying to load the MySQL table which resides i...

Latest Reply
Debayan
Esteemed Contributor III
  • 0 kudos

Hi, the error below indicates that there is an issue connecting to the host from Databricks. You can find more details about the network configurations at https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed...

5 More Replies
KoenZandvliet
by New Contributor III
  • 779 Views
  • 1 reply
  • 2 kudos

Resolved! Setting cluster permissions with API

I would like to update the permissions of a cluster using the API. The documentation mentions the following: PATCH api/2.0/permissions/{request_object_type}/{request_object_id}. Which {request_object_type} should I use? 'cluster', 'cluster' and 'compute' are no...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 2 kudos

@KoenZandvliet clusters is the one you should be looking for.

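A minimal sketch of that call, using `clusters` as the `{request_object_type}`. The workspace host, cluster ID, user, and permission level below are all hypothetical placeholders; the payload shape follows the Databricks Permissions API, but verify it against the current docs:

```python
import json

def cluster_permissions_request(host: str, cluster_id: str, user: str, level: str):
    """Build the URL and JSON body for a PATCH against the cluster
    Permissions API. PATCH merges entries into the existing ACL
    (PUT would replace the whole ACL)."""
    url = f"{host}/api/2.0/permissions/clusters/{cluster_id}"
    payload = {
        "access_control_list": [
            {"user_name": user, "permission_level": level}
        ]
    }
    return url, json.dumps(payload)

# Example with made-up identifiers; send the body as a PATCH with a
# bearer token using any HTTP client.
url, body = cluster_permissions_request(
    "https://example.cloud.databricks.com", "1234-567890-abcde123",
    "someone@example.com", "CAN_RESTART",
)
```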
NewContributor
by New Contributor III
  • 2751 Views
  • 8 replies
  • 3 kudos

Resolved! Databricks Certified Data Engineer Associate (Version 2) Exam got suspended

Hi Team, my Databricks Certified Data Engineer Associate (Version 2) exam got suspended today and it is still in a suspended state. I was continuously in front of the camera when suddenly the alert appeared and the support person asked me to show the full tab...

Latest Reply
Rob_79
New Contributor II
  • 3 kudos

Hi @Kaniz_Fatma, I've been in the same situation as Shifa and I've also raised a ticket with Databricks, but no feedback yet! Can you please help with that? Cheers, Rabie

7 More Replies
nadishancosta
by New Contributor II
  • 704 Views
  • 2 replies
  • 0 kudos

Cannot access community account

Resetting my password does not work. After I enter my new password, it just keeps processing. I waited for over 10 minutes, tried different browsers, tried a VPN; nothing works. Also, this happened randomly. I didn't forget my password; the sys...

Latest Reply
nadishancosta
New Contributor II
  • 0 kudos

It's for the Community Edition.

1 More Replies
aupres
by New Contributor III
  • 1703 Views
  • 3 replies
  • 0 kudos

Resolved! How to generate schema with org.apache.spark.sql.functions.schema_of_csv?

Hello, I use Spark 3.4.1 with Hadoop 3 on Windows 11, and I am struggling to generate the schema of CSV data with the schema_of_csv function. Below is my Java code. Map<String, String> kafkaParams = new HashMap<>(); kafkaParams.put("kafka.bootstrap.servers"...

Community Discussions
schema_of_csv
spark-java
Latest Reply
aupres
New Contributor III
  • 0 kudos

I used the org.apache.spark.sql.functions.lit method to solve this issue. Thank you anyway.

2 More Replies
zyang
by Contributor
  • 5869 Views
  • 5 replies
  • 3 kudos

Sync the production data in environment into test environment

Hello, I have a database called sales which contains several Delta tables and views in both the production and test workspaces. But the data is not synced, because some people develop code in the test workspace. As time passed, both the data and the tables i...

Latest Reply
Kaniz_Fatma
Community Manager
  • 3 kudos

Hi @zyang, to sync data and tables/views between production and test workspaces in Azure, the recommended approach is to use the Databricks Sync (DBSync) project, an object synchronization tool that backs up, restores, and syncs Databricks ...

4 More Replies
Chris_Shehu
by Valued Contributor III
  • 476 Views
  • 0 replies
  • 0 kudos

Feature Request: GUI: Additional Collapse options

When you're using a very large notebook, it sometimes gets frustrating to scroll through all the code blocks. It would be nice to have a few additional options to make this easier: 1) Add a collapse-all-code-cells button to the top. 2) Add a collapse a...

Community Discussions
Enhancement
Feature
GUI
Request
Oliver_Angelil
by Valued Contributor II
  • 1772 Views
  • 2 replies
  • 2 kudos

Resolved! Confirmation that Ingestion Time Clustering is applied

The article on Ingestion Time Clustering mentions that "Ingestion Time Clustering is enabled by default on Databricks Runtime 11.2"; however, how can I confirm it is active for my table? For example, is there a: True/False "Ingestion Time Clustered" fl...

Latest Reply
Oliver_Angelil
Valued Contributor II
  • 2 kudos

Thanks @NandiniN, that was very helpful. I have 3 follow-up questions: If I already have a table (350GB) that has been partitioned by 3 columns (Year, Month, Day) and stored hive-style with subdirectories Year=X/Month=Y/Day=Z, can I read it in...

1 More Replies
Dekova
by New Contributor II
  • 1178 Views
  • 1 reply
  • 1 kudos

Resolved! Photon and UDF efficiency

When using a JVM engine, Scala UDFs have an advantage over Python UDFs because data doesn't have to be shifted out to the Python environment for processing. If I understand the implications of using the Photon C++ engine, any processing that needs to...

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

Photon does not support UDFs: https://learn.microsoft.com/en-us/azure/databricks/runtime/photon#limitations. So when you create a UDF, Photon will not be used.

Dekova
by New Contributor II
  • 320 Views
  • 0 replies
  • 0 kudos

Structured Streaming and Workplace Max Jobs

From the documentation: A workspace is limited to 1000 concurrent task runs. A 429 Too Many Requests response is returned when you request a run that cannot start immediately.The number of jobs a workspace can create in an hour is limited to 10000 (i...

SSV_dataeng
by New Contributor II
  • 699 Views
  • 2 replies
  • 0 kudos

Plot number of abandoned cart items by product

abandoned_carts_df = (email_carts_df.filter(col('converted') == False).filter(col('cart').isNotNull()))
display(abandoned_carts_df)

abandoned_items_df = (abandoned_carts_df.select(col("cart").alias("items")).groupBy("items").count())
display(abandoned_...

Latest Reply
NandiniN
Honored Contributor
  • 0 kudos

Hi @SSV_dataeng, try: abandoned_items_df = (abandoned_carts_df.withColumn("items", explode("cart")).groupBy("items").count().sort("items"))

1 More Replies
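The explode-then-groupBy pattern in the reply above flattens each cart's item list into one row per item and then counts occurrences per item. A plain-Python sketch of the same logic (the sample cart data is made up for illustration):

```python
from collections import Counter

# Each row is an abandoned cart: a list of item names (sample data).
abandoned_carts = [
    ["mattress", "pillow"],
    ["pillow"],
    ["mattress", "blanket", "pillow"],
]

# explode("cart") -> one element per item; groupBy("items").count()
# -> a count per distinct item.
item_counts = Counter(item for cart in abandoned_carts for item in cart)

# .sort("items") -> order rows alphabetically by item name.
sorted_counts = sorted(item_counts.items())
```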
SSV_dataeng
by New Contributor II
  • 912 Views
  • 4 replies
  • 0 kudos

write to Delta

spark.conf.set("spark.databricks.delta.properties.defaults.columnMapping.mode", "name")
products_output_path = DA.paths.working_dir + "/delta/products"
products_df.write.format("delta").save(products_output_path)

verify_files = dbutils.fs.ls(products_ou...

Latest Reply
NandiniN
Honored Contributor
  • 0 kudos

Hi @SSV_dataeng, please check with this (you would have to indent it correctly for Python):
productsOutputPath = DA.workingDir + "/delta/products"
(productsDF.write.format("delta").mode("overwrite").save(productsOutputPath))
verify_files = dbutils.fs.ls(...

3 More Replies
marchino
by New Contributor II
  • 2644 Views
  • 4 replies
  • 1 kudos

Can I change Service Principal's OAuth token's expiration date?

Hi, since I have to read from a Databricks table via an external API, I created a Service Principal that would start a cluster and perform the operation. To authenticate the request on behalf of the Service Principal, I generate the OAuth token followi...

Latest Reply
NandiniN
Honored Contributor
  • 1 kudos

Hello @marchino, please check whether this is of interest: https://kb.databricks.com/en_US/security/set-an-unlimited-lifetime-for-service-principal-access-token

3 More Replies
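For context on the thread above, tokens for a service principal are minted through the Token Management API's on-behalf-of endpoint, where the lifetime is controlled by `lifetime_seconds`. A sketch of building that request; the endpoint path and field names are taken from the Databricks docs but should be treated as assumptions to verify, and the host and application ID below are hypothetical:

```python
import json

def obo_token_request(host: str, application_id: str, lifetime_seconds=None):
    """Build the POST request that creates a token on behalf of a
    service principal. Omitting lifetime_seconds requests a token
    without a fixed expiry, which is what the linked KB article
    describes for an "unlimited lifetime" token."""
    url = f"{host}/api/2.0/token-management/on-behalf-of/tokens"
    payload = {"application_id": application_id, "comment": "external-api-reads"}
    if lifetime_seconds is not None:
        payload["lifetime_seconds"] = lifetime_seconds
    return url, json.dumps(payload)

# One-hour token for a made-up service principal application ID.
url, body = obo_token_request(
    "https://example.cloud.databricks.com", "app-123", lifetime_seconds=3600
)
```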