Community Discussions

Forum Posts

subham0611
by New Contributor II
  • 258 Views
  • 1 reply
  • 0 kudos

How does coalesce work internally?

Hi Databricks team, I am trying to understand the internals of Spark's coalesce code (DefaultPartitionCoalescer) and am going through the Spark source for this. While I understood the coalesce function, I am not sure about the complete flow of the code, like where it gets call...

Latest Reply
raphaelblg
Contributor III
  • 0 kudos

Hello @subham0611, The coalesce operation triggered from user code can be initiated from either an RDD or a Dataset, with each having distinct code paths: RDD: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/RDD...

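
For readers following the same trail, here is a minimal sketch (not from the thread) of the two coalesce entry points the reply refers to, assuming a running Spark session such as the `spark` object a Databricks notebook provides:

```python
# Minimal sketch of the two coalesce entry points.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Dataset/DataFrame path: Dataset.coalesce adds a Repartition node with
# shuffle=false to the logical plan, which the planner turns into a
# coalesced physical partitioning.
df = spark.range(0, 1000, numPartitions=8)
print(df.coalesce(2).rdd.getNumPartitions())  # 2

# RDD path: RDD.coalesce(shuffle=False) builds a CoalescedRDD, whose
# grouping of parent partitions is computed by DefaultPartitionCoalescer.
rdd = spark.sparkContext.parallelize(range(1000), 8)
print(rdd.coalesce(2).getNumPartitions())  # 2
```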
DavidKxx
by New Contributor III
  • 341 Views
  • 5 replies
  • 1 kudos

Can't create branch of public git repo

Hi, I have cloned a public git repo into my Databricks account. It's a repo associated with an online training course. I'd like to work through the notebooks, maybe make some changes and updates, etc., but I'd also like to keep a clean copy of it. M...

Latest Reply
NandiniN
Valued Contributor III
  • 1 kudos

I get your issue, @DavidKxx. Until we do a git push on the command line we do not see the authentication failure ("Authentication failed" on git push origin test), while in the Databricks UI we fail early (screenshots below). We require the Databricks GitHub App, as mentioned here, to p...

4 More Replies
Cloud_Architect
by Visitor
  • 23 Views
  • 0 replies
  • 0 kudos

To generate a DBU consumption report

I need to access the following system tables to generate a DBU consumption report, but I am not seeing these tables in the system schema. Could you please help me access them? system.billing.inventory, system.billing.workspaces, system.billing.job_usage, ...

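
As a starting point while access is sorted out, a hedged sketch of a basic DBU report against system.billing.usage, the generally available billing table (the other table names in the post are not assumed here; an account admin must enable and grant the system schema):

```python
# Hedged sketch: aggregate DBUs per workspace, SKU, and day from
# system.billing.usage.
usage = spark.sql("""
    SELECT workspace_id,
           sku_name,
           usage_date,
           SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    GROUP BY workspace_id, sku_name, usage_date
    ORDER BY usage_date
""")
display(usage)
```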
MohsenJ
by New Contributor III
  • 1095 Views
  • 6 replies
  • 0 kudos

Log signature and input data for Spark LinearRegression

I am looking for a way to log my `pyspark.ml.regression.LinearRegression` model with input and signature data. The usual examples that I found around use sklearn, where they can simply do # Log the model with signature and input example signature =...

Labels: Community Discussions, mlflow, model_registray
Latest Reply
javierbg
New Contributor II
  • 0 kudos

@Abi105 I wasn't able to make it work, sorry

5 More Replies
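
Since the shown reply leaves the question open, here is a hedged sketch (not confirmed by the thread) of logging a pyspark.ml model with a signature via mlflow.spark.log_model; the column names ("x1", "x2", "label") are illustrative placeholders:

```python
# Hedged sketch: log a pyspark.ml LinearRegression model with a signature
# and input example inferred from pandas frames.
import mlflow
from mlflow.models import infer_signature
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.getOrCreate()
train = spark.createDataFrame(
    [(1.0, 2.0, 3.0), (2.0, 4.0, 6.1), (3.0, 6.0, 9.2)],
    ["x1", "x2", "label"],
)
features = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
model = LinearRegression().fit(features.transform(train))

preds = model.transform(features.transform(train))
signature = infer_signature(
    train.toPandas()[["x1", "x2"]],
    preds.toPandas()[["prediction"]],
)

with mlflow.start_run():
    mlflow.spark.log_model(
        model,
        artifact_path="model",
        signature=signature,
        input_example=train.toPandas()[["x1", "x2"]].head(2),
    )
```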
Neeraj_Kumar
by New Contributor
  • 123 Views
  • 1 reply
  • 0 kudos

Issues with Runtime 15.1/15.2 Beta in shared access mode

We have been using runtime 14.2, shared access mode, for our computing cluster in Databricks for quite some time. We are now trying to upgrade to Python 3.11 for some dependency management, thereby requiring us to use runtime 15.1/15.2, as runtime 14.2 only ...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Neeraj_Kumar, Ensure that the necessary libraries are available in the repository used for installation. Verify that the library versions specified are correct and available. Consider installing the library with a different version or from a diffe...

georgeyjy
by Visitor
  • 83 Views
  • 2 replies
  • 0 kudos

Resolved! Why does saving a pyspark df always convert string fields to numbers?

import pandas as pd
from pyspark.sql.types import StringType, IntegerType
from pyspark.sql.functions import col
save_path = os.path.join(base_path, stg_dir, "testCsvEncoding")
d = [{"code": "00034321"}, {"code": "55964445226"}]
df = pd.Data...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 0 kudos

@georgeyjy Try opening the CSV in a text editor. I bet that Excel is automatically trying to detect the schema of the CSV, so it thinks the column is an integer.

1 More Replies
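
To confirm the diagnosis in the accepted answer, a hedged sketch (assuming a Databricks notebook's `spark` session; the path is a placeholder) that writes the column as a string and reads it back without schema inference:

```python
# Hedged sketch: the CSV itself keeps the leading zeros; only Excel's
# auto-detection renders "00034321" as a number. Reading back with an
# explicit schema (no inferSchema) shows the string survives the round trip.
from pyspark.sql.types import StructType, StructField, StringType

sdf = spark.createDataFrame([("00034321",), ("55964445226",)], ["code"])
sdf.write.mode("overwrite").option("header", True).csv("/tmp/testCsvEncoding")

schema = StructType([StructField("code", StringType())])
back = (spark.read.option("header", True)
             .schema(schema)
             .csv("/tmp/testCsvEncoding"))
back.show()  # leading zeros intact
```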
Madhawa
by New Contributor II
  • 57 Views
  • 2 replies
  • 0 kudos

Resolved! Unable to access AWS S3 - Error : java.nio.file.AccessDeniedException

Reading a file like this: Data = spark.sql("SELECT * FROM edge.inv.rm") and getting this error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 10 in stage 441.0 failed 4 times, most recent failure: Lost task 10.3 in stage 441.0 (TID...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Madhawa,  Ensure that the AWS credentials (access key and secret key) are correctly configured in your Spark application. You can set them using spark.conf.set("spark.hadoop.fs.s3a.access.key", "your_access_key") and spark.conf.set("spark.hadoop....

1 More Replies
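
A hedged sketch of the credential configuration the reply outlines; the key names follow the standard hadoop-aws (s3a) settings and all values are placeholders. Note that for a Unity Catalog table like edge.inv.rm, access is normally granted through storage credentials and external locations rather than cluster-level keys, so the grants on the table's external location are worth checking too:

```python
# Hedged sketch: set s3a credentials on the Spark session (placeholders).
spark.conf.set("spark.hadoop.fs.s3a.access.key", "your_access_key")
spark.conf.set("spark.hadoop.fs.s3a.secret.key", "your_secret_key")

# Then retry the read that raised AccessDeniedException.
data = spark.sql("SELECT * FROM edge.inv.rm")
```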
Shravanshibu
by New Contributor III
  • 131 Views
  • 1 reply
  • 0 kudos

Unable to install a wheel file from my volume on a serverless cluster

I am trying to install a wheel file which is in my volume to a serverless cluster and getting the below error: @ken @Kaniz Note: you may need to restart the kernel using %restart_python or dbutils.library.restartPython() to use updated packages. WARNING: R...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Shravanshibu, Verify that the wheel file is actually present at the specified location. Double-check the path to ensure there are no typos or missing directories. Remember that Databricks mounts DBFS (Databricks File System) at /dbfs on cluster no...

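
For context, a hedged sketch of how a wheel in a Unity Catalog volume is typically installed on serverless compute, as a notebook cell; the catalog, schema, volume, and file names are placeholders:

```python
# Hedged sketch (notebook cell): install a wheel from a UC volume path,
# then restart Python so the updated package is picked up.
%pip install /Volumes/my_catalog/my_schema/my_volume/my_package-0.1.0-py3-none-any.whl
%restart_python
```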
_databreaks
by New Contributor II
  • 119 Views
  • 1 reply
  • 0 kudos

DLT to push data instead of a pull

I am relatively new to Databricks, and from my recent experience it appears that at every step in a DLT pipeline, we define each LIVE TABLE (be it streaming or not) to pull data from upstream. I have yet to see an implementation where data from upstream woul...

Labels: Community Discussions, dlt, DLT pipeline
Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @_databreaks, You're absolutely right! While the typical approach in Databricks involves pulling data from upstream sources into downstream tables, there are scenarios where a push-based architecture could be beneficial. Pull-Based Architectu...

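
To make the pull model concrete, a hedged sketch of a two-step DLT pipeline where each table declares the upstream it reads from; the source path and table names are placeholders:

```python
# Hedged sketch: downstream tables pull from upstream LIVE tables via
# dlt.read_stream; nothing pushes data forward.
import dlt
from pyspark.sql.functions import col

@dlt.table(name="bronze_events")
def bronze_events():
    # Pull raw files from a landing location (placeholder path).
    return (spark.readStream.format("cloudFiles")
                 .option("cloudFiles.format", "json")
                 .load("/Volumes/my_catalog/raw/events"))

@dlt.table(name="silver_events")
def silver_events():
    # Pull from the upstream table rather than having bronze push to silver.
    return dlt.read_stream("bronze_events").where(col("event_type").isNotNull())
```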
RobsonNLPT
by Contributor
  • 294 Views
  • 1 reply
  • 0 kudos

Databricks UC Data Lineage Official Limitations

Hi all. I have a huge data migration project using the medallion architecture, UC, notebooks, and workflows. One of the relevant requirements we have is to capture all data dependencies (upstream and downstream) using data lineage. I've followed all re...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @RobsonNLPT, Consider checking the documentation for any updates or upcoming features related to capturing CTEs as upstreams in your chosen solution.

devendra_tomar
by New Contributor
  • 62 Views
  • 1 reply
  • 0 kudos

How to Read Data from Databricks Worker Nodes in Unity Catalog Volume

I am currently working on a similarity search use case where we need to extract text from PDF files and create a vector index. We have stored our PDF files in a Unity Catalog Volume, and I can successfully read these files from the driver node. Here's...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @devendra_tomar, Unity Catalog volumes represent logical storage volumes in a cloud object storage location. They allow governance over non-tabular datasets, providing capabilities for accessing, storing, and organizing files. While tables govern ...

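
A hedged sketch of the usual pattern for worker-side reads: load the volume path with a distributed Spark reader instead of Python file I/O, which resolves the path only on the driver (the volume path is a placeholder):

```python
# Hedged sketch: binaryFile reads each PDF on the executors, yielding
# path, modificationTime, length, and content (raw bytes) per file.
pdfs = (spark.read.format("binaryFile")
             .option("pathGlobFilter", "*.pdf")
             .load("/Volumes/my_catalog/my_schema/pdf_docs"))

# A UDF or pandas_udf can then parse `content` into text in parallel.
pdfs.select("path", "length").show(truncate=False)
```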
NarenderKumar
by New Contributor III
  • 582 Views
  • 3 replies
  • 0 kudos

Resolved! Unable to generate account-level PAT for service principal

I am trying to generate a PAT for a service principal. I am following the documentation as shown below: https://docs.databricks.com/en/dev-tools/auth/oauth-m2m.html#create-token-in-account I have prepared the below curl command: I am getting the below error: Pl...

(screenshots attached: NarenderKumar_0-1715695724302.png, NarenderKumar_1-1715695859890.png, NarenderKumar_2-1715695895738.png)
Latest Reply
NarenderKumar
New Contributor III
  • 0 kudos

I was able to generate the workspace-level token using the Databricks CLI. I set the following details in the Databricks CLI profile (.databrickscfg) file: host = https://myworkspace.azuredatabricks.net/ account_id = (my db account id) client_id = ...

2 More Replies
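
For completeness, a hedged sketch of the workspace-level OAuth M2M exchange described in the linked docs, written with requests instead of curl; the host and credentials are placeholders:

```python
# Hedged sketch: exchange a service principal's client ID/secret for an
# OAuth access token at the workspace token endpoint.
import requests

resp = requests.post(
    "https://myworkspace.azuredatabricks.net/oidc/v1/token",
    auth=("<client-id>", "<client-secret>"),
    data={"grant_type": "client_credentials", "scope": "all-apis"},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```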
jensen22
by Contributor
  • 436 Views
  • 1 reply
  • 0 kudos

[Delta Live Tables vs. Workflows]

Hi Community Members, I have been using Databricks for a while, but I have only used Workflows. I have a question about the differences between Delta Live Tables and Workflows. Which one should we use in which scenario? Thanks,

kazinahian
by New Contributor III
  • 803 Views
  • 2 replies
  • 1 kudos

Resolved! Enable or disable Databricks Assistant in the Community Edition.

Hello, Good afternoon, great people. I was following the step-by-step instructions to enable or disable Databricks Assistant in my Databricks Community Edition to enable the AI assistant. However, I couldn't find the option and was unable to enable it...

Labels: Community Discussions, datbricks community
Latest Reply
kazinahian
New Contributor III
  • 1 kudos

Thank you @Kaniz 

1 More Replies