by
Rinat
• New Contributor
- 2892 Views
- 0 replies
- 0 kudos
I know you can set "spark.sql.shuffle.partitions" and "spark.sql.adaptive.advisoryPartitionSizeInBytes". The former will not work with adaptive query execution, and the latter only works for the first shuffle for some reason, after which it just uses...
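For reference, the interaction the question describes can be sketched as a set of AQE-related settings. This is a minimal, hypothetical sketch (the property names are standard Spark AQE configuration keys; the values are illustrative only), showing how coalescing is driven by the advisory size rather than a fixed partition count:

```python
# Hypothetical sketch: with adaptive query execution (AQE) enabled, the
# post-shuffle coalescing step targets advisoryPartitionSizeInBytes, so a
# fixed spark.sql.shuffle.partitions value is largely ignored.
# On a cluster these would be applied via spark.conf.set(key, value).
aqe_conf = {
    "spark.sql.adaptive.enabled": "true",
    "spark.sql.adaptive.coalescePartitions.enabled": "true",
    # Starting shuffle partition count before AQE coalesces; when set, it
    # takes the place of spark.sql.shuffle.partitions under AQE.
    "spark.sql.adaptive.coalescePartitions.initialPartitionNum": "400",
    # Target per-partition size AQE aims for when coalescing.
    "spark.sql.adaptive.advisoryPartitionSizeInBytes": "64m",
}
```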
by
AJK1
• New Contributor II
- 6311 Views
- 0 replies
- 0 kudos
- 4828 Views
- 3 replies
- 2 kudos
Hello, I would like to know if it is possible to filter a dashboard by the current user's email? For example, I have a table result for a group of people with the following columns: user_id, user_email, date, productivity. So with this table I create som...
Latest Reply
Hey guys, after some research on the documentation, I found out that if I filter the query using the current_user() function, I get the result that I was looking for. If anyone needs it, look at this: https://docs.databricks.com/sql/language-manual/fun...
2 More Replies
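The approach in the reply above can be sketched as a query filter. This is a minimal, hypothetical sketch: the column names come from the question, while the table name is made up for illustration:

```python
# Hypothetical sketch: filter the dashboard's backing query with the SQL
# current_user() function so each viewer only sees their own rows.
query = """
SELECT user_id, user_email, date, productivity
FROM productivity_table               -- hypothetical table name
WHERE user_email = current_user()     -- email of the logged-in viewer
"""
# In a Databricks notebook or SQL editor this would run as: spark.sql(query)
```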
- 1557 Views
- 0 replies
- 0 kudos
import asyncio
import os
from azure.eventhub.aio import EventHubConsumerClient

CONNECTION_STR = "Connection_string"
EVENTHUB_NAME = "event_hub"

async def on_event(partition_context, event):
    # Put your code here.
    # If the operation is i/o intensive, ...
by
Teja07
• New Contributor II
- 7395 Views
- 0 replies
- 0 kudos
While ingesting data from Oracle to Databricks through IICS, the target table was created; however, data is not getting inserted. Below is the error. Could someone please help me?
Exception occurred when initializing data session. Root cause: java.lang....
- 2739 Views
- 3 replies
- 0 kudos
Hey there, I am using dbx to create Databricks tasks and deploy the job. I find it not ideal since the iteration cycles are sometimes a bit long when I have to wait for a job with several tasks to complete and see where it failed. I am already tryin...
Latest Reply
Hello, thanks for the answer. Unfortunately, this did not help me, since it is general best practice. @Debayan Mukherjee​
2 More Replies
by
Thor
• New Contributor III
- 8051 Views
- 0 replies
- 0 kudos
I have already improved the performance of our ETL a lot (x20!), but I still want to know where I can gain more. It seems that table stats and column indexing slow down writes a bit, so I want to decrease dataSkippingNumIndexedCols to match t...
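The property the question mentions can be lowered per table. A minimal sketch, with a hypothetical table name and an illustrative column count (the property key is Delta's `delta.dataSkippingNumIndexedCols`):

```python
# Hypothetical sketch: restrict Delta statistics collection to the first
# few columns so writes spend less time computing per-column stats.
ddl = """
ALTER TABLE my_table                                        -- hypothetical
SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '5')
"""
# On a cluster this would run as: spark.sql(ddl)
```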
by
de-hru
• New Contributor III
- 30976 Views
- 4 replies
- 1 kudos
I'd like to add a Git pre-commit hook to the Databricks Cluster. This pre-commit hook should be executed when pushing to GitHub. Why would I need a pre-commit hook on a Databricks Cluster? My goal is to run blackbricks and format all notebooks automatic...
Latest Reply
Hi @Dejan Hrubenja​ Hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you.Tha...
3 More Replies
by
sanjay
• Valued Contributor II
- 8159 Views
- 0 replies
- 0 kudos
Hi, I have a data pipeline which runs continuously, processes micro-batch data, and stores it in Delta Lake. This takes care of any new data. But at times, I need to process historical data without disturbing real-time data processing. Is th...
- 2474 Views
- 2 replies
- 1 kudos
I am trying to find documents/flows that show Databricks' network setup for e2 workspaces. More specifically, I'm interested in how DNS is resolved on AWS. All the pages I could find were regarding using route53 and privatelink for custom dns. But pl...
Latest Reply
Hi @A H​ Hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you.Thanks!
1 More Replies
by
Aj2
• Databricks Partner
- 15789 Views
- 4 replies
- 1 kudos
What are the steps needed to connect to a DB2-AS400 source to pull data to the lake using Databricks? I believe it requires establishing a JDBC connection, but I could not find many details online.
Latest Reply
Hi @Ajay Menon​ Hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you.Thanks!
3 More Replies
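A common route for the question above is Spark's generic JDBC reader with the IBM Toolbox for Java (jt400) driver. This is a hedged sketch, not a verified recipe: the host, library, and table names are hypothetical, and it assumes the jt400 jar is installed on the cluster:

```python
# Hypothetical sketch: options for spark.read.format("jdbc") against
# DB2 for i (AS/400) using the jt400 driver class.
jdbc_options = {
    "url": "jdbc:as400://my-as400-host",              # hypothetical host
    "driver": "com.ibm.as400.access.AS400JDBCDriver",
    "dbtable": "MYLIB.MYTABLE",                       # hypothetical library/table
    "user": "dbuser",
    "password": "***",                                # better: a secret scope
}
# On a cluster: spark.read.format("jdbc").options(**jdbc_options).load()
```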
- 7544 Views
- 2 replies
- 0 kudos
I'm using an Azure Databricks notebook to read an Excel file from a folder inside a mounted Azure blob storage. The mounted Excel location is like: "/mnt/2023-project/dashboard/ext/Marks.xlsx". 2023-project is the mount point and dashboard is the name o...
Latest Reply
Hi @vichus1995​ Hope all is well! Just wanted to check in if you were able to resolve your issue and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you.Thanks!
1 More Replies
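One way to read a mounted workbook like the one above is through the local /dbfs path with pandas. A minimal sketch, assuming the openpyxl engine is available on the cluster (the path is taken from the question):

```python
import pandas as pd

def read_marks(path: str = "/dbfs/mnt/2023-project/dashboard/ext/Marks.xlsx"):
    # A DBFS mount is visible to local file APIs under the /dbfs prefix,
    # so pandas can open the .xlsx directly; openpyxl handles xlsx files.
    return pd.read_excel(path, engine="openpyxl")

# On a cluster the result could be converted with spark.createDataFrame(...)
```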
by
Jits
• New Contributor II
- 2475 Views
- 2 replies
- 3 kudos
Hi All, I am creating a table using the Databricks SQL editor. The table definition is:
DROP TABLE IF EXISTS [database].***_test;
CREATE TABLE [database].***_jitu_test (
  id bigint
)
USING delta
LOCATION 'test/raw/***_jitu_test'
TBLPROPERTIES ('delta.minReaderVersi...
Latest Reply
Hi @jitendra goswami​ We haven't heard from you since the last response from @Werner Stinckens​, and I was checking back to see if the suggestions helped you. Or else, if you have any solution, please share it with the community, as it can be helpf...
1 More Replies
- 3550 Views
- 2 replies
- 2 kudos
in dbx community edition, the autoloader works using the s3 mount. s3 mount, autoloader:
dbutils.fs.mount(f"s3a://{access_key}:{encoded_secret_key}@{aws_bucket_name}", f"/mnt/{mount_name}")
from pyspark.sql import SparkSession
from pyspark.sql.functions ...
Latest Reply
Hi @Joe Gorse​ Thank you for posting your question in our community! We are happy to assist you.To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers you...
1 More Replies
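The mount-plus-Auto-Loader pattern in the question above can be sketched as options for a cloudFiles stream. This is a hypothetical sketch: the paths and file format are made up, and only the `cloudFiles.*` option keys are standard Auto Loader settings:

```python
# Hypothetical sketch: Auto Loader (cloudFiles) settings for reading files
# that land under an S3 mount, as in the question above.
autoloader_options = {
    "cloudFiles.format": "json",                        # hypothetical format
    "cloudFiles.schemaLocation": "/mnt/bucket/_schema", # hypothetical path
}
source_path = "/mnt/bucket/input"                       # hypothetical path
# On a cluster:
# (spark.readStream.format("cloudFiles")
#       .options(**autoloader_options)
#       .load(source_path))
```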
by
signo
• New Contributor II
- 9060 Views
- 2 replies
- 2 kudos
Databricks Runtime: 12.2 LTS, Spark: 3.3.2, Delta Lake: 2.2.0. A target table with schema ([c1: integer, c2: integer]) allows us to write into the target table using data with schema ([c1: integer, c2: double]). I expected it to throw an exception (same a...
Latest Reply
Hi @Sigrun Nordli​ Thank you for posting your question in our community! We are happy to assist you.To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers...
1 More Replies