Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

Henrik
by New Contributor III
  • 2573 Views
  • 1 replies
  • 0 kudos

Delta Live Tables and GIT

Notebooks that run in a Delta Live Tables pipeline are Git enabled, but what about the Delta Live Tables pipeline itself? I'm looking for a good way to deploy pipelines from DEV to TEST and from TEST to PROD that deploys not just the notebooks but also the pipeline. What pos...

Latest Reply
NandiniN
Databricks Employee
  • 0 kudos

Hello @Henrik, Databricks Asset Bundles would help you do this - https://docs.databricks.com/en/dev-tools/bundles/pipelines-tutorial.html Also, this is a wonderful post addressing your query - https://www.databricks.com/blog/applying-software-develop...
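For anyone who prefers to script the promotion instead of (or alongside) Asset Bundles, below is a minimal hedged sketch using the Databricks Python SDK to create a DLT pipeline per environment; the profile names, notebook path, and target schema are placeholders, and exact field names can vary across SDK versions.

    # Hedged sketch: create a DLT pipeline per environment with the Databricks Python SDK.
    # This is an alternative to the Asset Bundles approach recommended above; all names
    # below are placeholders, not values from this thread.
    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.pipelines import PipelineLibrary, NotebookLibrary

    def deploy_pipeline(cli_profile: str, env: str) -> None:
        w = WorkspaceClient(profile=cli_profile)   # one CLI profile per DEV/TEST/PROD workspace
        w.pipelines.create(
            name=f"sales_dlt_{env}",
            target=f"analytics_{env}",             # destination schema for the pipeline
            development=(env == "dev"),
            libraries=[PipelineLibrary(notebook=NotebookLibrary(path="/Repos/team/project/dlt_notebook"))],
        )

    deploy_pipeline("DEV", "dev")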

gupta_tanmay
by New Contributor II
  • 159 Views
  • 2 replies
  • 0 kudos

Enable delta sharing oss using pyspark.

I have started the Delta Sharing server using the guide at https://github.com/delta-io/delta-sharing, but I am not able to create a profile. How can I correctly create a profile and share a table through Delta Sharing with the data stored in MinIO?

Latest Reply
james598keen
New Contributor II
  • 0 kudos

@gupta_tanmay wrote: I have started the Delta Sharing server using the guide at https://github.com/delta-io/delta-sharing, but I am not able to create a profile. How can I correctly create a profile and share the table through Delta Sharing with the data stor...
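For reference, a minimal sketch of what an OSS Delta Sharing client profile looks like and how to exercise it from Python; the endpoint, token, and share/schema/table names are placeholders and assume the server's config already exposes a share backed by the MinIO-stored table.

    # Hedged sketch: write a Delta Sharing profile file and read a shared table.
    # The endpoint, bearer token, and share/schema/table names are placeholders.
    import json
    import delta_sharing

    profile = {
        "shareCredentialsVersion": 1,
        "endpoint": "http://localhost:8080/delta-sharing/",  # URL of your sharing server
        "bearerToken": "<token-from-server-config>",
    }
    with open("config.share", "w") as f:
        json.dump(profile, f)

    client = delta_sharing.SharingClient("config.share")
    print(client.list_all_tables())                          # verify the share is visible
    df = delta_sharing.load_as_pandas("config.share#my_share.my_schema.my_table")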

1 More Replies
yjiao
by New Contributor
  • 848 Views
  • 1 replies
  • 0 kudos

Use DataBricks migration tool to export query

Dear all, I tried to use the Databricks migration tool (https://github.com/databrickslabs/migrate) to migrate objects from one Databricks instance to another. I realized that notebooks, clusters, and jobs can be migrated, but queries cannot be migrated by this to...

Latest Reply
thelogicplus
New Contributor II
  • 0 kudos

@yjiao  If you're planning to migrate from your current technology to Databricks, Travinto Technologies' Code Converter Tool is here to make the process seamless. This powerful tool enables you to migrate data, ETL workflows, and reports across platf...
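As a possible workaround for the queries gap (not something this thread confirms), SQL query definitions can be exported with the Databricks Python SDK and re-created in the target workspace; a hedged sketch, with the CLI profile name as a placeholder:

    # Hedged sketch: dump SQL query definitions from the source workspace to JSON so
    # they can be re-created in the target workspace. "SOURCE" is a placeholder profile.
    import json
    from databricks.sdk import WorkspaceClient

    source = WorkspaceClient(profile="SOURCE")
    exported = [q.as_dict() for q in source.queries.list()]
    with open("queries_export.json", "w") as f:
        json.dump(exported, f, indent=2)
    # Re-creation on the target side would loop over this file, call the corresponding
    # create API, and remap warehouse IDs between the two workspaces.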

data-warriors
by New Contributor
  • 1151 Views
  • 1 replies
  • 0 kudos

URGENT - Databricks workspace deletion & recovery

Hi Team, I accidentally deleted our Databricks workspace, which had all our artefacts and control plane and was the primary resource for our team's working environment. Could anyone please help on priority regarding the recovery/restoration mechanis...

Latest Reply
steyler-db
Databricks Employee
  • 0 kudos

Hello data-warriors, note: users cannot recover a deleted Databricks instance directly from the Azure Portal. A deleted Databricks instance can only be recovered by opening a support ticket, where our core engineering team will help you recover the Da...

invalidargument
by New Contributor III
  • 140 Views
  • 2 replies
  • 0 kudos

Disable mlflow autologging in a helper notebook

We have a helper function that uses an sklearn estimator. We don't want it to be logged to MLflow. I can do:

def myfunc():
    import mlflow
    with mlflow.autolog.ignore:
        # train model
        # use model
    return predictions

But I get info prints:...

Latest Reply
invalidargument
New Contributor III
  • 0 kudos

mlflow.autolog(disable=True, silent=True) fixes the printing, but my other problem, setting autologging back to its previous state, is still unsolved. I can't find any information about that problem in the docs.
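One way to scope this (a sketch, not an official MLflow API) is a small context manager that disables autologging on entry and re-enables it on exit; it assumes autologging was globally enabled beforehand, since MLflow does not expose a simple public getter for the previous state.

    # Hedged sketch: temporarily silence MLflow autologging inside a helper.
    # Assumes autologging was enabled before and should be re-enabled afterwards.
    from contextlib import contextmanager
    import mlflow

    @contextmanager
    def autolog_paused():
        mlflow.autolog(disable=True, silent=True)       # stop logging and suppress info prints
        try:
            yield
        finally:
            mlflow.autolog(disable=False, silent=True)  # restore (assumes it was on before)

    def myfunc(X, y, X_new):
        from sklearn.linear_model import LinearRegression
        with autolog_paused():
            model = LinearRegression().fit(X, y)        # not logged to MLflow
            return model.predict(X_new)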

1 More Replies
alluarjun
by New Contributor
  • 959 Views
  • 6 replies
  • 0 kudos

databricks asset bundle error-terraform.exe": file does not exist

Hi, I am getting the below error while deploying a Databricks bundle using an Azure DevOps release: 2024-07-07T03:55:51.1199594Z Error: terraform init: exec: "xxxx\\.databricks\\bundle\\dev\\terraform\\xxxx\\.databricks\\bundle\\dev\\bin\\terraform.exe": ...

Latest Reply
BNG_FGA
New Contributor II
  • 0 kudos

We used Git Bash for bash execution, and to set up the variables we went to Control Panel -> System -> Environment Variables.

5 More Replies
scharly3
by New Contributor II
  • 13703 Views
  • 8 replies
  • 1 kudos

Error: Folder xxxx@xxx.com is protected

Hello, on Azure Databricks I'm trying to remove a folder under the Repos folder using the following command: databricks workspace delete "/Repos/xxx@xx.com". I got the following error message: databricks workspace delete "/Repos/xxxx@xx.com" Error: Folder ...

Latest Reply
Walter_C
Databricks Employee
  • 1 kudos

Can you share a screenshot of the folder with me?

7 More Replies
ksenija
by Contributor
  • 1385 Views
  • 3 replies
  • 0 kudos

Foreign table to delta streaming table

I want to copy a table from a foreign catalog as my streaming table. This is the code I used, but I am getting the error "Table table_name does not support either micro-batch or continuous scan.": spark.readStream.table(table_name) ...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

What is the underlying type of the table you are trying to stream from? Structured Streaming does not currently support streaming reads via JDBC, so reads from MySQL, Postgres, etc. are not supported. If you are trying to perform stream ingestion fr...
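One common pattern in this situation (an assumption, not something the truncated reply confirms) is to land the foreign table into a Delta table with a periodic batch read and then stream from the Delta copy, since Delta tables do support micro-batch scans; the catalog, schema, and table names below are placeholders and `spark` is assumed to be a Databricks notebook session.

    # Hedged sketch: batch-copy the JDBC-backed foreign table into Delta, then stream
    # from the Delta copy. All names are placeholders.
    (spark.read
        .table("foreign_catalog.sales.orders")        # batch reads work against foreign tables
        .write.mode("overwrite")
        .saveAsTable("main.bronze.orders_copy"))

    (spark.readStream
        .option("skipChangeCommits", "true")          # needed if the copy is refreshed by overwrites
        .table("main.bronze.orders_copy")             # Delta tables support streaming reads
        .writeStream
        .option("checkpointLocation", "/Volumes/main/bronze/_checkpoints/orders")
        .trigger(availableNow=True)
        .toTable("main.silver.orders"))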

2 More Replies
crowley
by New Contributor III
  • 2709 Views
  • 1 replies
  • 0 kudos

How are Struct type columns stored/accessed (interested in efficiency)?

Hello, I've searched around for a while and didn't find a similar question here or elsewhere, so I thought I'd ask... I'm assessing the storage/access efficiency of Struct type columns in Delta tables. I want to know more about how Databricks is storing...

Latest Reply
cgrant
Databricks Employee
  • 0 kudos

Delta Lake uses Apache Parquet as the underlying format for its data files. Spark structs are encoded as Parquet SchemaElements, which are simply wrappers around standard types. What this means is that storage and access characteristics should be ide...
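To see this encoding directly, here is a minimal sketch (assuming a classic cluster where `spark` is available, pyarrow is installed, and the DBFS root is mounted at /dbfs) that writes a struct column and prints the Parquet file schema, where the struct's fields appear as ordinary leaf columns.

    # Hedged sketch: a struct column is stored as a Parquet group whose fields are plain
    # leaf columns, so reading one field behaves like reading a top-level column.
    import glob
    import pyarrow.parquet as pq
    from pyspark.sql import functions as F

    df = spark.range(3).withColumn(
        "profile", F.struct(F.lit("alice").alias("name"), F.col("id").alias("score"))
    )
    df.write.mode("overwrite").parquet("dbfs:/tmp/struct_demo")

    # Inspect one of the underlying Parquet files (DBFS root assumed visible at /dbfs).
    print(pq.read_schema(glob.glob("/dbfs/tmp/struct_demo/*.parquet")[0]))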

varunep
by New Contributor II
  • 210 Views
  • 3 replies
  • 0 kudos

Certification exam got suspended

Hello Team, my Data Engineer Associate exam got suspended within seconds of starting, without any reason. After starting the exam, the screen paused after just 10-20 seconds, and there was a notice that someone would contact you, but no one contacted me till the ...

Latest Reply
BigRoux
Databricks Employee
  • 0 kudos

You need to raise a ticket with Databricks to get to a resolution: https://help.databricks.com/s/contact-us?ReqType=training

2 More Replies
hrishiharsh25
by New Contributor
  • 97 Views
  • 1 replies
  • 0 kudos

Liquid Clustering

How can I use a column for liquid clustering that is not among the first 32 columns of my Delta table schema?

Latest Reply
PotnuruSiva
Databricks Employee
  • 0 kudos

We can only specify columns with statistics collected for clustering keys. By default, the first 32 columns in a Delta table have statistics collected. See Specify Delta statistics columns. We can use the below workaround for your use case: 1. Use th...
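A hedged sketch of the kind of workaround typically used here (the reply is truncated, so the exact steps are an assumption): extend the statistics columns with the delta.dataSkippingStatsColumns table property, recompute statistics, and then set the clustering key. Table and column names are placeholders.

    # Hedged sketch: collect statistics for a column beyond the default first 32,
    # then use it as a liquid clustering key. All names are placeholders.
    spark.sql("""
        ALTER TABLE main.sales.events
        SET TBLPROPERTIES ('delta.dataSkippingStatsColumns' = 'event_ts, country')
    """)
    spark.sql("ANALYZE TABLE main.sales.events COMPUTE DELTA STATISTICS")  # backfill stats
    spark.sql("ALTER TABLE main.sales.events CLUSTER BY (country)")
    spark.sql("OPTIMIZE main.sales.events")  # recluster existing data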

RajPutta
by New Contributor
  • 630 Views
  • 1 replies
  • 0 kudos

Databricks migration

How easy is it to migrate from Snowflake or Redshift to Databricks?

Latest Reply
thelogicplus
New Contributor II
  • 0 kudos

Hi @RajPutta: it is very easy to migrate anything to Databricks; the only thing required is your team's knowledge of the Databricks platform. The steps below are important: Discovery, Assessment, Code conversion and Migration (this is very important), PoC. If you want to migrate fro...

ismaelhenzel
by Contributor
  • 2200 Views
  • 3 replies
  • 3 kudos

Failure when deploying a custom LLM serving endpoint

I'm currently experimenting with vector search using Databricks. Everything runs smoothly when I load the model deployed in Unity Catalog into a notebook session and ask questions using Python. However, when I attempt to serve it, I encounter a gener...

Latest Reply
brycejune
New Contributor III
  • 3 kudos

Ensure your vector_search_endpoint_name and vs_index_fullname match the deployment setup. Check model deployment logs for detailed errors and confirm your workspace's network settings allow access to Unity Catalog and model serving endpoints in a pri...
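As a quick sanity check before redeploying (a sketch using the Databricks Vector Search Python client; the endpoint and index names are placeholders), it can help to confirm from a notebook that the names the serving endpoint is configured with actually resolve:

    # Hedged sketch: verify that the vector search endpoint and index used by the
    # serving endpoint resolve from a notebook. Names are placeholders.
    from databricks.vector_search.client import VectorSearchClient

    vsc = VectorSearchClient()
    index = vsc.get_index(
        endpoint_name="my_vector_search_endpoint",
        index_name="main.rag.docs_index",
    )
    print(index.describe())   # confirms the index exists and shows its current status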

2 More Replies
shrikant_kulkar
by New Contributor III
  • 2814 Views
  • 2 replies
  • 2 kudos

c# connector for databricks Delta Sharing

Any plans for adding a C# connector? What are the alternative ways in the current state?

Latest Reply
Shawn_Eary
Contributor
  • 2 kudos

I'm having problems getting the REST API calls for Delta Sharing to work. Python and Power BI work fine, but the C# code that Databricks AI generates does not work. I keep getting an "ENDPOINT NOT FOUND" error even though config.share is fine. A C# con...
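Until an official C# connector exists, the Python client is a handy reference for what the raw REST calls must look like; a minimal sketch (the share/schema/table name is a placeholder) showing the working path that a C# HttpClient port would need to reproduce, including the bearer token and the endpoint URL taken from config.share:

    # Hedged sketch: the working Python path, useful as a reference when porting the
    # equivalent REST calls to C#. The table coordinates are placeholders.
    import delta_sharing

    profile = "config.share"
    client = delta_sharing.SharingClient(profile)
    print(client.list_all_tables())    # exercises the /shares endpoints under the hood
    df = delta_sharing.load_as_pandas(f"{profile}#my_share.my_schema.my_table")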

1 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group