- 1234 Views
- 5 replies
- 1 kudos
How do I contact billing support? I am billed through AWS Marketplace and noticed last month the SQL Pro discount is not being reflected in my statement.
Latest Reply
Hi, could anybody provide a contact email? I have sent emails to many of the contacts listed on the support page here and in AWS, but no response from any channel. My problem is that Databricks charged me for the resources used during a free trial, what I...
4 More Replies
by
LukeD
• New Contributor II
- 416 Views
- 3 replies
- 1 kudos
Hi, what is the best way to contact Databricks support? I see differences between the AWS billing and the Databricks report and I'm looking for an explanation. I've sent 3 messages last week via this form https://www.databricks.com/company/contact but...
Latest Reply
Hi, I'm facing the same issue with signing in to my workspace, and I have a billing error: Databricks charged me for a free trial. I have sent a lot of emails, posted a topic in the community, and contacted people at AWS, who said that it must be ...
2 More Replies
by
MCosta
• New Contributor III
- 6996 Views
- 11 replies
- 20 kudos
Hi ML folks,
We are using Databricks to train deep learning models. The code, however, has a complex structure of classes. This would work fine in a perfect bug-free world like Alice in Wonderland.
Debugging in Databricks is awkward. We ended up do...
Latest Reply
Has this been solved yet, i.e. is there a mature way to debug code on Databricks? I'm running into the same kind of issue. The variable explorer and pdb can be used, but it's not really the same...
10 More Replies
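In the meantime, one low-tech pattern that does work in a notebook is post-mortem debugging with the standard pdb module after a cell fails. Nothing below is Databricks-specific, and buggy_divide is just an illustrative stand-in for real model code:

```python
import pdb
import sys

def buggy_divide(a, b):
    # Stands in for real code that raises deep inside a class hierarchy
    return a / b

try:
    buggy_divide(1, 0)
except ZeroDivisionError:
    exc_tb = sys.exc_info()[2]
    # Uncomment in a notebook to step through the failing frame interactively:
    # pdb.post_mortem(exc_tb)
    print("failed in:", exc_tb.tb_next.tb_frame.f_code.co_name)
    # failed in: buggy_divide
```

In an IPython-based notebook, running the %debug magic in the next cell after an exception achieves the same thing without any setup.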
- 868 Views
- 2 replies
- 1 kudos
My question is pretty straightforward: how big should a Delta table be to benefit from liquid clustering? I know the answer will most likely depend on the details of how you are querying the data, but what is the recommendation? I know Databricks re...
Latest Reply
@DatBoi Once you watch this video you'll understand more about Liquid Clustering: https://www.youtube.com/watch?v=5t6wX28JC_M&ab_channel=DeltaLake Long story short: I know Databricks recommends not partitioning tables smaller than 1 TB and aiming for 1 GB ...
1 More Replies
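To make that concrete, enabling liquid clustering is a one-line change at table creation; the table and column names below are made-up examples:

```sql
-- Hypothetical table; CLUSTER BY replaces PARTITIONED BY / ZORDER ordering
CREATE TABLE sales_events (
  event_id    BIGINT,
  event_date  DATE,
  customer_id BIGINT
)
CLUSTER BY (event_date, customer_id);

-- Clustering is applied incrementally when OPTIMIZE runs
OPTIMIZE sales_events;
```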
- 572 Views
- 3 replies
- 0 kudos
Hi all, we are using a cluster with the 9.1 runtime version, and I'm getting an "incompatible schema exception" error while writing the data into an Avro file. The Avro schema has more fields than the dataframe output. I tried the same in Community Edition ...
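One way to narrow down an incompatible-schema error like this is to diff the field names declared in the Avro schema against the dataframe's output columns before writing. A plain-Python sketch; the schema and column list here are made-up examples:

```python
import json

def diff_schema_fields(avro_schema_json: str, df_columns: list) -> dict:
    """Compare Avro record field names against dataframe column names."""
    schema = json.loads(avro_schema_json)
    avro_fields = {f["name"] for f in schema["fields"]}
    df_fields = set(df_columns)
    return {
        "only_in_avro": sorted(avro_fields - df_fields),  # likely cause of the mismatch
        "only_in_df": sorted(df_fields - avro_fields),
    }

# Example: the Avro schema declares one field the dataframe does not produce
schema = ('{"type": "record", "name": "r", "fields": ['
          '{"name": "id", "type": "long"}, {"name": "extra", "type": "string"}]}')
print(diff_schema_fields(schema, ["id"]))
# {'only_in_avro': ['extra'], 'only_in_df': []}
```

Any name listed under only_in_avro either needs a default in the schema or a matching column in the dataframe.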
- 1228 Views
- 1 reply
- 0 kudos
Suppose I have thousands of historical .csv files stored since Jan 2022 in a folder of my Azure Blob Storage container. I want to use Auto Loader to read files beginning only on 1 Oct 2023, ignoring all the files before this date, to build a pipel...
Latest Reply
@BhaveshPatel Three things that you can do:
- Move the files to a separate folder,
- Use a filter on metadata fields to filter out the unnecessary files,
- Use a pathGlobFilter to match only the files you need
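The second and third suggestions combine naturally. The selection logic they apply, match a glob pattern and keep only files modified after a cutoff, looks roughly like this in plain Python (the file names and cutoff date are made up):

```python
from datetime import datetime
from fnmatch import fnmatch

def select_files(files, glob_pattern, modified_after):
    """Keep files that match the glob and were modified after the cutoff."""
    return [
        name for name, modified in files
        if fnmatch(name, glob_pattern) and modified > modified_after
    ]

# (name, modification time) pairs standing in for blob-storage metadata
files = [
    ("events_2022-01-05.csv", datetime(2022, 1, 5)),
    ("events_2023-10-02.csv", datetime(2023, 10, 2)),
    ("notes_2023-10-03.txt",  datetime(2023, 10, 3)),
]
cutoff = datetime(2023, 10, 1)
print(select_files(files, "*.csv", cutoff))
# ['events_2023-10-02.csv']
```

In the actual stream these correspond to options like .option("pathGlobFilter", "*.csv") and .option("modifiedAfter", ...) on the cloudFiles reader; check the Auto Loader docs for the exact timestamp string format modifiedAfter expects.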
- 463 Views
- 3 replies
- 0 kudos
I'm using a Python UDF to apply OCR to each row of a dataframe which contains the URL to a PDF document. This is how I define my UDF:

def extract_text(url: str):
    ocr = MyOcr(url)
    extracted_text = ocr.get_text()
    return json.dumps(extracte...
Latest Reply
@Bharathi7 It's really hard to determine what's going on without knowing what the MyOcr function actually does. Maybe there's some kind of timeout on the service side? Too many parallel connections?
2 More Replies
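When the external OCR service is flaky, wrapping the call so a single bad URL returns an error record instead of failing the whole task can help isolate the problem. A plain-Python sketch; MyOcr is the poster's class, and FakeOcr below is only a stub standing in for it:

```python
import json

def safe_extract_text(url, ocr_factory, retries=2):
    """Call the OCR client with retries; return an error record instead of raising."""
    last_error = None
    for _ in range(retries + 1):
        try:
            text = ocr_factory(url).get_text()
            return json.dumps({"url": url, "text": text})
        except Exception as exc:  # in Spark, an unhandled error here fails the task
            last_error = exc
    return json.dumps({"url": url, "error": str(last_error)})

# Stub OCR client standing in for MyOcr
class FakeOcr:
    def __init__(self, url):
        self.url = url
    def get_text(self):
        if self.url.endswith(".pdf"):
            return "hello"
        raise ValueError("not a pdf")

print(safe_extract_text("a.pdf", FakeOcr))  # {"url": "a.pdf", "text": "hello"}
print(safe_extract_text("a.txt", FakeOcr))  # {"url": "a.txt", "error": "not a pdf"}
```

Surfacing the error string per row also makes service-side timeouts visible in the output instead of buried in executor logs.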
- 1122 Views
- 4 replies
- 0 kudos
I am trying to enable serverless mode in Delta Live Tables, based on the official Databricks YouTube video "Delta Live Tables A to Z: Best Practices for Modern Data Pipelines". And I cannot find it in my UI. Could you help me with...
Latest Reply
The problem is that the "Serverless" checkbox does not appear in my UI Pipeline Settings. So, I do not know how to enable serverless given your instructions. Can you tell me why the button is not displayed or how to display it or how to enable DLT se...
3 More Replies
- 645 Views
- 2 replies
- 0 kudos
Hi, currently I am using the below-mentioned query to create a materialized view. It was working fine until yesterday in the DLT pipeline, but from today on, the code below throws an error (com.databricks.sql.transaction.tahoe.ColumnMappingE...
Latest Reply
Hi @Poovarasan, The error message you’re encountering, com.databricks.sql.transaction.tahoe.ColumnMappingException: Found duplicated column id 2 in column, indicates that there is a conflict related to column IDs in your query.
Let’s break down the ...
1 More Replies
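In materialized-view cases this conflict often comes from the same column arriving twice in the SELECT list, for example once explicitly and once via a join or a `*` projection. A quick plain-Python sanity check over a column list (the names are made up):

```python
from collections import Counter

def find_duplicate_columns(columns):
    """Return column names that appear more than once, case-insensitively."""
    counts = Counter(c.lower() for c in columns)
    return sorted(name for name, n in counts.items() if n > 1)

# 'id' appears twice: once explicitly and once from a joined table
print(find_duplicate_columns(["id", "name", "ID", "amount"]))
# ['id']
```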
by
elgeo
• Valued Contributor II
- 5652 Views
- 6 replies
- 4 kudos
Hello. Is there a way to enforce the length of a column in SQL? For example that a column has to be exactly 18 characters? Thank you!
Latest Reply
We are facing similar issues while writing into an ADLS location in Delta format; on top of that Delta location we created Unity Catalog tables. Should it be possible to change the data type length in the format below, and is this supported in Spark SQL? Azure SQL Spark ...
5 More Replies
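Back to the original question: one way to enforce an exact length on a Delta table is a CHECK constraint, which rejects non-conforming writes. The table and column names below are hypothetical:

```sql
-- Reject any write where account_no is not exactly 18 characters
ALTER TABLE demo_accounts
ADD CONSTRAINT account_no_len CHECK (LENGTH(account_no) = 18);
```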
by
NT911
• New Contributor II
- 463 Views
- 1 reply
- 0 kudos
I have shape files with polygon/geometry info. I am exporting the file after Sedona integration with Kepler. The output file is in .html, and I want to reduce the file size. Please suggest any options that are available.
Latest Reply
Hi @NT911, When dealing with shape files and trying to reduce the file size, there are a few strategies you can consider:
Simplify Geometries:
One effective method is to simplify the geometries in your shape file. This involves reducing the numb...
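The idea can be sketched without any GIS library: every vertex dropped shrinks the coordinate payload embedded in the HTML. The toy plain-Python version below only removes vertices that are collinear with their neighbours; real workloads would use a tolerance-based simplifier such as Sedona's ST_SimplifyPreserveTopology or Shapely's simplify:

```python
def drop_collinear(points):
    """Remove interior vertices that lie on the line between kept neighbours."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    for cur, nxt in zip(points[1:], points[2:]):
        prev = kept[-1]
        # Cross product is zero when prev, cur, nxt are collinear
        cross = ((cur[0] - prev[0]) * (nxt[1] - prev[1])
                 - (cur[1] - prev[1]) * (nxt[0] - prev[0]))
        if cross != 0:
            kept.append(cur)
    kept.append(points[-1])
    return kept

# A square outline sampled with redundant midpoints
outline = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (0, 2), (0, 0)]
print(drop_collinear(outline))
# [(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)]
```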
- 714 Views
- 3 replies
- 7 kudos
Rename and drop columns with Delta Lake column mapping. Hi all, Databricks now supports column rename and drop. Column mapping requires the following Delta protocols: Reader version 2 or above; Writer version 5 or above. Blog URL ## Available in D...
Latest Reply
The above-mentioned feature is not working in the DLT pipeline if the script has more than 4 columns.
2 More Replies
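For reference, the property upgrade plus a rename and a drop look like this on a hypothetical table:

```sql
-- Enable column mapping (needs reader v2 / writer v5 as noted above)
ALTER TABLE demo_events SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
);

ALTER TABLE demo_events RENAME COLUMN old_name TO new_name;
ALTER TABLE demo_events DROP COLUMN obsolete_col;
```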
by
afisl
• New Contributor II
- 2732 Views
- 6 replies
- 2 kudos
Hello, I'm interested in the "Tags" feature on columns/schemas/tables in Unity Catalog (described here: https://learn.microsoft.com/en-us/azure/databricks/data-governance/unity-catalog/tags). I've been able to play with them by hand and would now lik...
Latest Reply
Just confirming that, as of March 2024, you can use SQL to set/unset tags on: tables, table columns, and views, but NOT on view columns; however, you CAN do that via the UI.
5 More Replies
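The SQL for the supported cases looks like this; the catalog, table, and tag names are made-up examples:

```sql
-- Tag a table
ALTER TABLE main.sales.orders SET TAGS ('pii' = 'false', 'owner' = 'data-eng');

-- Tag a table column
ALTER TABLE main.sales.orders ALTER COLUMN email SET TAGS ('pii' = 'true');

-- Remove a tag
ALTER TABLE main.sales.orders UNSET TAGS ('owner');
```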
- 1238 Views
- 4 replies
- 0 kudos
Hello all, we are building a data warehouse on Unity Catalog and we use the SHALLOW CLONE command to allow folks to spin up their own dev environments by light-copying the prod tables. We also started using liquid clustering on our feature tables, tho...
Latest Reply
Thanks, Kaniz, for your reply. I was able to get it to work using runtime 14.0. Regards,
3 More Replies
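For reference, the clone pattern in question; the catalog and table names are hypothetical:

```sql
-- Light-copy a prod table into a dev schema: data files are shared, metadata is copied
CREATE OR REPLACE TABLE dev.sales.orders
SHALLOW CLONE prod.sales.orders;
```

As the reply above suggests, shallow clones of Unity Catalog tables need a recent runtime, so pin the cluster version accordingly.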
- 996 Views
- 1 reply
- 2 kudos
I am using Databricks Asset Bundles as an IaC tool with Databricks. I want to create a cluster using DAB and then reuse the same cluster in multiple jobs. I cannot find an example of this. All the examples I found specify individual...
Latest Reply
Hello, jobs are specific in Databricks; a job definition also contains the cluster definition, because when you run a job, a new cluster is created based on the cluster specification you provided for the job, and it exists only until the job is complet...
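What you can share is the cluster *specification*: within one job, a job_clusters entry is declared once and reused by tasks via job_cluster_key, and across jobs a plain YAML anchor avoids repeating the spec. A hedged sketch of a databricks.yml fragment, with all names illustrative and assuming the bundle's YAML loader accepts standard anchors:

```yaml
resources:
  jobs:
    job_a:
      name: job-a
      job_clusters:
        - job_cluster_key: main
          new_cluster: &shared_cluster   # YAML anchor: spec defined once here...
            spark_version: "14.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            num_workers: 2
      tasks:
        - task_key: t1
          job_cluster_key: main
          notebook_task:
            notebook_path: ./notebooks/a.py
    job_b:
      name: job-b
      job_clusters:
        - job_cluster_key: main
          new_cluster: *shared_cluster   # ...and reused here without repetition
      tasks:
        - task_key: t1
          job_cluster_key: main
          notebook_task:
            notebook_path: ./notebooks/b.py
```

Each job still gets its own short-lived cluster at run time, as described above; the anchor only keeps the two specs identical in source.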