Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Forum Posts

coltonflowers
by New Contributor III
  • 2336 Views
  • 1 reply
  • 0 kudos

DLT: Only STREAMING tables can have multiple queries.

I am trying to do a one-time back-fill on a DLT table following the example here:
dlt.table()
def test():
    # providing a starting version
    return (spark.readStream.format("delta")
        .option("readChangeFeed", "true")
        .option("...

Latest Reply
coltonflowers
New Contributor III
  • 0 kudos

I should also add that when I drop the `backfill` function, validation happens successfully and we get the following pipeline DAG:
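For anyone landing here with the same error: only streaming tables can accept multiple flows writing into them, so one pattern that validates (a minimal sketch with hypothetical table names, based on the dlt.create_streaming_table / @dlt.append_flow API, not the poster's exact pipeline) is to declare the target once and attach both the ongoing query and the backfill as append flows:

import dlt

# Declare the streaming target once; only streaming tables accept
# multiple flows/queries writing into them.
dlt.create_streaming_table("test")

@dlt.append_flow(target="test")
def ongoing():
    # Continuous incremental load from the change feed.
    return (
        spark.readStream.format("delta")
        .option("readChangeFeed", "true")
        .option("startingVersion", 1)
        .table("source_table")  # hypothetical source
    )

@dlt.append_flow(target="test")
def backfill():
    # One-time catch-up read of the historical data; remove this flow
    # after it has completed a run.
    return spark.readStream.table("history_table")  # hypothetical source

Recent DLT releases also document a once=True argument on append_flow for exactly this one-time backfill case; worth checking against the current docs.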

Sujitha
by Databricks Employee
  • 13313 Views
  • 1 reply
  • 1 kudos

Introducing AI Model Sharing with Databricks!

Today, we're excited to announce that AI model sharing is available in both Databricks Delta Sharing and on the Databricks Marketplace. With Delta Sharing you can now easily share and serve AI models securely within your organization or externally ac...

[Attachment: Screenshot 2024-02-06 at 7.01.48 PM.png]
Latest Reply
johnsonit
New Contributor II
  • 1 kudos

I'm eager to dive in and leverage these new features to elevate my AI game with Databricks. This is Johnson from KBS Technologies. Thanks for your update.

Sweetness
by New Contributor II
  • 3768 Views
  • 3 replies
  • 0 kudos

Cannot create a repo because the parent path does not exist

I tried following this doc: Work With Large Monorepos With Sparse Checkout Support in Databricks Repos | Databricks Blog. When I hook it up to my repos using Azure DevOps Services and check-mark Sparse checkout mode, I pass in a subdirectory in my Cone p...

Latest Reply
Sweetness
New Contributor II
  • 0 kudos

This is Azure Databricks
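For others hitting this: the "parent path does not exist" error usually means the folder the repo would be created under (e.g. /Repos/<user-or-folder>) has not been created yet. A hedged sketch of the REST workaround, creating the parent first (host, token, org, and paths are all placeholders):

import requests

HOST = "https://adb-000000000000.0.azuredatabricks.net"        # placeholder workspace URL
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder token

# 1) Ensure the parent folder exists (no-op if it already does).
requests.post(
    f"{HOST}/api/2.0/workspace/mkdirs",
    headers=HEADERS,
    json={"path": "/Repos/someone@example.com"},
).raise_for_status()

# 2) Create the repo with sparse checkout enabled.
requests.post(
    f"{HOST}/api/2.0/repos",
    headers=HEADERS,
    json={
        "url": "https://dev.azure.com/myorg/myproject/_git/myrepo",  # placeholder
        "provider": "azureDevOpsServices",
        "path": "/Repos/someone@example.com/myrepo",
        "sparse_checkout": {"patterns": ["my/subdirectory"]},  # cone patterns
    },
).raise_for_status()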

2 More Replies
Frantz
by New Contributor III
  • 11010 Views
  • 2 replies
  • 0 kudos

Resolved! Show Existing Header From CSV In External Table

Hello, is there a way to load csv data into an external table without the _c0, _c1 columns showing?

Latest Reply
Frantz
New Contributor III
  • 0 kudos

My question was answered in a separate thread here.

1 More Replies
Frantz
by New Contributor III
  • 4007 Views
  • 3 replies
  • 0 kudos

Resolved! Unable to load csv data with correct header values in External tables

Hello, is there a way to load "CSV" data into an external table without the _c0, _c1 columns showing? I've tried using the options within the SQL statement, but that does not appear to work, which results in this table:

[Attachments: Frantz_0-1707258246022.png, Frantz_1-1707258264972.png]
Labels: Get Started Discussions, External Tables, Unity Catalog
Latest Reply
feiyun0112
Honored Contributor
  • 0 kudos

You need to set "USING data_source": https://community.databricks.com/t5/data-engineering/create-external-table-using-multiple-paths-locations/td-p/44042
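To make that pointer concrete, a minimal sketch (catalog, schema, and path are hypothetical): declare the data source with USING CSV and pass the header option so the first row supplies column names instead of _c0, _c1:

spark.sql("""
    CREATE TABLE main.default.my_csv_table
    USING CSV
    OPTIONS (header "true", inferSchema "true")
    LOCATION 'abfss://container@account.dfs.core.windows.net/path/to/csv/'
""")

Because a LOCATION is given, this creates an external table; the same OPTIONS clause also works with an explicit column list if you prefer not to infer the schema.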

2 More Replies
Lambda
by New Contributor
  • 1712 Views
  • 1 reply
  • 0 kudos

What is the REST API payload for "not sending alert when alert is back to normal"?

Greetings, I am using the REST API to create and schedule SQL alerts. By default these alerts send notifications when they are back to normal, which I don't want. In the UI I have the option to uncheck the box (shown in the picture), but I can't find any do...

[Attachment: Lambda_0-1702306187563.png]
Latest Reply
Yeshwanth
Databricks Employee
  • 0 kudos

@Lambda Did you refer to this documentation? https://docs.databricks.com/api/workspace/alerts/create
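In the alerts API that page documents, the "back to normal" notification appears to be controlled by the notify_on_ok field (treat the exact field name as an assumption and verify it against the linked docs). A sketch of the create call with Python requests, all IDs and thresholds hypothetical:

import requests

HOST = "https://<workspace-host>"                              # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder token

payload = {
    "display_name": "my-alert",
    "query_id": "<query-id>",  # placeholder
    "condition": {
        "op": "GREATER_THAN",
        "operand": {"column": {"name": "value"}},
        "threshold": {"value": {"double_value": 100.0}},
    },
    # Assumption: False suppresses the "alert is back to normal"
    # notification, matching the UI checkbox in the screenshot.
    "notify_on_ok": False,
}

resp = requests.post(f"{HOST}/api/2.0/sql/alerts", headers=HEADERS, json=payload)
resp.raise_for_status()
print(resp.json())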

Phani1
by Databricks MVP
  • 1928 Views
  • 1 reply
  • 0 kudos

Spark vCore for Databricks vCPU

Hi Team, what is the equivalent Spark vCore for a Databricks vCPU? For example, for DS3 v2, vCPU = 4 and RAM = 14.00 GiB; I would like to know the equivalent Spark vCore count for DS3 v2 as in Azure Databricks Pricing | Microsoft Azure. Regards, Phanindra

Latest Reply
Yeshwanth
Databricks Employee
  • 0 kudos

@Phani1 In Databricks, the vCPU count is equivalent to the number of Spark vCores. This means that if the DS3 v2 instance has 4 vCPU, it would also have 4 Spark vCores. Please note that the Spark VCore count is based on the vCPU count of the underlyi...
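A quick way to sanity-check that mapping from a notebook (a sketch; the expected number assumes 4 DS3 v2 workers at 4 vCPUs each):

# Total task slots Spark sees across the cluster's executors.
# With 4 DS3 v2 workers (4 vCPUs each) this should print 16,
# i.e. workers x vCPUs = Spark vCores.
print(spark.sparkContext.defaultParallelism)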

AtomicBoy99
by New Contributor
  • 1470 Views
  • 1 reply
  • 0 kudos

Can't enter 'edit mode' using shortcut

Hi, with the recently added AI Assistant in Databricks notebooks, I'm having issues entering the 'edit mode' of a notebook cell. Previously, I could simply press the 'Enter' key to do this, but it no longer works. Is anyone else having the sa...

Latest Reply
Yeshwanth
Databricks Employee
  • 0 kudos

@AtomicBoy99, it seems like the fix is in place. Can you confirm if it is working well now?

Kaizen
by Valued Contributor
  • 4602 Views
  • 2 replies
  • 0 kudos

Python logging can't save log in DBFS

Hi! I am trying to integrate logging into my project. I got the library and logs to work but can't log the file into DBFS directly. Have any of you been able to save and append the log file directly to DBFS? From what I came across online, the best way to...

[Attachment: Kaizen_0-1707174350136.png]
Latest Reply
feiyun0112
Honored Contributor
  • 0 kudos

You can use azure_storage_logging: Set Python Logging to Azure Blob, but Can not Find Log File there - Stack Overflow
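Another common workaround (a sketch, one approach among several; paths are hypothetical): the /dbfs FUSE mount does not support the append/seek pattern a logging FileHandler relies on, so log to the driver's local disk and copy the file to DBFS when the run finishes:

import logging

# Write to the driver's local disk first; handlers pointed directly at
# /dbfs are unreliable because random writes are not supported there.
logging.basicConfig(
    filename="/tmp/app.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
logging.info("pipeline started")

# ... run the job ...

# Copy the finished log into DBFS (dbutils is available in notebooks).
dbutils.fs.cp("file:/tmp/app.log", "dbfs:/logs/app.log")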

1 More Replies
Jasonh202222
by New Contributor II
  • 6547 Views
  • 2 replies
  • 1 kudos

Databricks notebook: how to stop truncating numbers when exporting query results to CSV

I use Databricks notebooks to query databases and export/download the results to CSV. I accidentally closed a pop-up window asking whether to truncate the numbers; I chose yes and 'don't ask again'. Now all my long-digit numbers are trunca...

Latest Reply
shan_chandra
Databricks Employee
  • 1 kudos

@Jasonh202222 - Kindly check the following navigation path: User Settings -> Account Settings -> Display -> Download and Export. Under Download and Export, enable the checkbox "Prompt for formatting large numbers when downloading or exporting" and cl...

1 More Replies
edmundsecho
by New Contributor II
  • 6480 Views
  • 2 replies
  • 1 kudos

Resolved! Difference between username and account_id

I have a web app that can read files from a person's cloud-based drive (e.g., OneDrive, Google Drive, Dropbox).  The app gets access to the files using OAuth2. The app only ever has access to the files for that user.  Part of the configuration requir...

Latest Reply
edmundsecho
New Contributor II
  • 1 kudos

The provided links were helpful. The take-away:
  • usernames are "globally" unique to an individual; the username is the person's email.
  • a username can be associated with up to 50 accounts; account_ids track the resources available to the user.
This cl...

1 More Replies
sumitdesai
by New Contributor III
  • 3448 Views
  • 0 replies
  • 0 kudos

Using streaming data received from Pub/sub topic

I have a notebook in Databricks in which I am streaming a Pub/Sub topic. The code for this looks like the following:
%pip install --upgrade google-cloud-pubsub[pandas]
from pyspark.sql import SparkSession
authOptions = {"clientId": "123", "clientEmail"...
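For readers who find this unanswered post: the Pub/Sub source that ships with Databricks (DBR 13.1+) is read with format("pubsub"). A hedged sketch of the usual shape, with placeholder project, subscription, and service-account credentials:

authOptions = {
    "clientId": "<client-id>",                 # placeholders for the GCP
    "clientEmail": "<service-account-email>",  # service-account credentials
    "privateKey": "<private-key>",
    "privateKeyId": "<private-key-id>",
}

df = (
    spark.readStream.format("pubsub")
    .option("subscriptionId", "my-subscription")  # hypothetical names
    .option("topicId", "my-topic")
    .option("projectId", "my-gcp-project")
    .options(**authOptions)
    .load()
)

# Land the stream in a Delta table; a checkpoint location is required.
(df.writeStream
   .option("checkpointLocation", "/tmp/pubsub-checkpoint")
   .toTable("main.default.pubsub_events"))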

AbdurRehman
by New Contributor II
  • 953 Views
  • 0 replies
  • 1 kudos

Error Signing Up for Databricks Community Edition

@Retired_mod I've been trying to sign up for Databricks Community Edition using different email addresses over the past 24 hours, but I keep getting the error message: "An error has occurred. Please try again later." Can anyone help? Tags: #Databricks...

Kaizen
by Valued Contributor
  • 5172 Views
  • 2 replies
  • 0 kudos

Resolved! Using Python RPA Library on Databricks

Hi, I didn't see any conversations regarding using the Python RPA package on Databricks clusters. Is anyone doing this, or has anyone gotten it to work successfully on the clusters? I ran into the following errors: 1) Initially I was getting the error below rega...

[Attachments: Kaizen_0-1706743633855.png, Kaizen_1-1706743734836.png]
Latest Reply
feiyun0112
Honored Contributor
  • 0 kudos

If you want to capture a browser screenshot, you can use Playwright:
%sh
pip install playwright
playwright install
sudo apt-get update
playwright install-deps
from playwright.async_api import async_playwright
async with async_playwright() as p: ...
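Expanded into a runnable form (a sketch using Playwright's sync API, which is simpler in a notebook cell than the async variant above; the URL and output path are placeholders), after running the %sh setup shown in the reply:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch headless Chromium, open a page, and save a full-page screenshot.
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL
    page.screenshot(path="/tmp/screenshot.png", full_page=True)
    browser.close()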

1 More Replies
ChristianRRL
by Valued Contributor III
  • 2427 Views
  • 0 replies
  • 0 kudos

Unity Catalog: Databricks *Specific* Features

Good day, deceptively simple question: are there any "Databricks only" specific features that Unity Catalog offers? I understand that, generally speaking, enabling UC offers some of the following:
  • Data Discovery and Lineage
  • Auditing and Monitoring
  • Access C...

