Community Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

Simon_T
by New Contributor III
  • 441 Views
  • 3 replies
  • 1 kudos

Resolved! Databricks Bundle Error

I am running this command: databricks bundle deploy --profile DAVE2_Dev --debug And I am getting this error: 10:13:28 DEBUG open dir C:\Users\teffs.THEAA\OneDrive - AA Ltd\Databricks\my_project\dist: open C:\Users\teffs.THEAA\OneDrive - AA Ltd\Databr...

Latest Reply
Simon_T
New Contributor III
  • 1 kudos

I found a page saying that the databricks bundle command expects python3.exe instead of python.exe. So I made a copy of python.exe, renamed it to python3.exe, and that seems to work. Thanks for investigating though.
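A minimal sketch of that workaround (paths are illustrative; on a real machine you would copy the python.exe already on your PATH into the same directory under the name python3.exe):

```python
import os
import shutil
import sys
import tempfile

# Sketch of the workaround: create a "python3" executable by copying the
# current interpreter. dest_dir is a stand-in; in practice you would place
# the copy in a directory that is already on your PATH.
dest_dir = tempfile.mkdtemp()
suffix = ".exe" if os.name == "nt" else ""
dest = os.path.join(dest_dir, "python3" + suffix)
shutil.copy(sys.executable, dest)
```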

2 More Replies
InquisitiveGeek
by New Contributor II
  • 108 Views
  • 3 replies
  • 0 kudos

How can I store my cell output as a text file on my local drive?

I want to store the output of my cell as a text file on my local hard drive. I'm getting JSON output, and I need that JSON saved to my local drive as a text file.

Latest Reply
Slash
New Contributor II
  • 0 kudos

Hi @InquisitiveGeek, you can do this using the approach described here: https://learn.microsoft.com/en-us/azure/databricks/notebooks/notebook-outputs#download-results
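For completeness, a minimal sketch of writing a cell's JSON output to a text file before downloading it (the /dbfs/FileStore path mentioned in the comment is an assumption about a typical setup; a temp dir stands in for it here):

```python
import json
import os
import tempfile

result = {"status": "ok", "rows": 3}  # stand-in for the cell's JSON output

# On Databricks you might write under /dbfs/FileStore/ so the file can be
# downloaded from the workspace UI; a temp dir is used for this sketch.
path = os.path.join(tempfile.mkdtemp(), "output.txt")
with open(path, "w") as f:
    json.dump(result, f, indent=2)
```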

2 More Replies
himanmon
by New Contributor II
  • 113 Views
  • 2 replies
  • 1 kudos

Can I move a single file larger than 100GB using dbutils.fs?

Hello. I have a file over 100GB. Sometimes this is on the cluster's local path, and sometimes it's on the volume. And I want to send this to another path on the volume, or to the S3 bucket. dbutils.fs.cp('file:///tmp/test.txt', '/Volumes/catalog/schem...

Latest Reply
Slash
New Contributor II
  • 1 kudos

Hi @himanmon, this is caused by the S3 limit on segment count: the part files can be numbered only from 1 to 10000. After setting spark.hadoop.fs.s3a.multipart.size to 104857600, did you RESTART the cluster? Because it'll only work when the clust...
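For reference, a sketch of where that setting goes, assuming the S3A connector is in use: add it to the cluster's Spark config (cluster edit page) and restart the cluster so it takes effect.

```
spark.hadoop.fs.s3a.multipart.size 104857600
```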

1 More Replies
yurib
by New Contributor II
  • 187 Views
  • 2 replies
  • 0 kudos

Resolved! error creating token when creating databricks_mws_workspace resource on GCP

 resource "databricks_mws_workspaces" "this" { depends_on = [ databricks_mws_networks.this ] provider = databricks.account account_id = var.databricks_account_id workspace_name = "${local.prefix}-dbx-ws" location = var.google_region clou...

Latest Reply
yurib
New Contributor II
  • 0 kudos

My issue was caused by credentials in `~/.databrickscfg` (generated by the Databricks CLI) taking precedence over the credentials set by `gcloud auth application-default login`. Google's application default credentials should be used when using the databricks google...
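A sketch of the kind of profile that can cause this shadowing (all values are placeholders, not taken from the thread):

```ini
; ~/.databrickscfg -- illustrative profile generated by the Databricks CLI
[DEFAULT]
host  = https://accounts.gcp.databricks.com
token = dapi...placeholder...
; Removing or renaming this profile lets the Google application-default
; credentials take effect instead.
```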

1 More Replies
FaizH
by New Contributor II
  • 210 Views
  • 2 replies
  • 1 kudos

Resolved! Error - Data Masking

Hi, I was testing the masking functionality of Databricks and got the below error: java.util.concurrent.ExecutionException: com.databricks.sql.managedcatalog.acl.UnauthorizedAccessException: PERMISSION_DENIED: Query on table dev_retransform.uc_lineage.test_...

Latest Reply
Slash
New Contributor II
  • 1 kudos

Hi @FaizH, are you using single-user compute by any chance? Because if you do, there is the following limitation: Single-user compute limitation: do not add row filters or column masks to any table that you are accessing from a single-user cluster. During t...

1 More Replies
thilanka02
by New Contributor II
  • 921 Views
  • 3 replies
  • 1 kudos

Resolved! Spark read CSV does not throw Exception if the file path is not available in Databricks 14.3

We were using this method, and it was working as expected in Databricks 13.3: def read_file(): try: df_temp_dlr_kpi = spark.read.load(raw_path, format="csv", schema=kpi_schema) return df_temp_dlr_kpi except Exce...

Latest Reply
databricks100
New Contributor II
  • 1 kudos

Hi, has this been resolved? I am still seeing this issue with Runtime 14.3 LTS. Thanks in advance.
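Until the runtime behavior is clarified, one hedged workaround (not from this thread) is to validate the path eagerly before calling spark.read, since lazy evaluation can defer the missing-path error past the try/except. This sketch only covers local file paths, not cloud URIs:

```python
import os

def validate_path(raw_path):
    """Eagerly verify that a CSV path exists. Workaround sketch: newer
    runtimes may defer the missing-path error until an action runs, so
    the try/except around spark.read never fires at read time."""
    return os.path.exists(raw_path)

# In the notebook: if validate_path(raw_path) is False, raise or log
# before calling spark.read.load(raw_path, format="csv", schema=kpi_schema).
```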

2 More Replies
Rash
by New Contributor
  • 93 Views
  • 1 reply
  • 0 kudos

Databricks feature

Would like to know if Databricks supports a write-back feature from Alteryx to Databricks.

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Rash, Alteryx does support write-back to Databricks. To achieve this, you can use the Data Stream In tool in Alteryx, which allows you to write data to Databricks. The write support is facilitated via the Databricks Bulk Loader.

HelloME
by New Contributor
  • 112 Views
  • 1 reply
  • 0 kudos

Databricks Asset Bundles - terraform.tfstate not matching when using databricks bundle deploy

Hello, I have noticed something strange with asset bundle deployments via the CLI tool. I am trying to run databricks bundle deploy and I'm getting an error saying the job ID doesn't exist or I don't have access to it. Error: cannot read job: User h...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @HelloME, Consider updating your Databricks CLI to the latest version.

Mrinal16
by New Contributor
  • 100 Views
  • 1 reply
  • 0 kudos

Connection with virtual machine

I have to upload files from an Azure container to a virtual machine using Databricks. I have mounted my files in Databricks. Please help me do this if you have any ideas.

Latest Reply
imsabarinath
New Contributor II
  • 0 kudos

I am not sure if you need to further curate the data before you upload it to the virtual machine; if not, you can just mount the storage on the VM: Create an SMB Azure file share and connect it to a Windows VM | Microsoft Learn; Azure Storage - Create File Storag...

johnp
by New Contributor III
  • 159 Views
  • 2 replies
  • 1 kudos

Resolved! Access to "Admin Console" and "System Tables"

I am the contributor and owner of my Databricks workspace. After a recent spike in expenses, I want to check the billing details of my Azure Databricks usage (i.e. per cluster, per VM, etc.). Databricks provides this information through the "Admin Conso...

Latest Reply
Rishabh_Tiwari
Community Manager
  • 1 kudos

Hi @johnp, thank you for reaching out to our community! We're here to help you. To ensure we provide you with the best support, could you please take a moment to review the responses and choose the one that best answers your question? Your feedback...

1 More Replies
lbdatauser
by New Contributor II
  • 88 Views
  • 1 reply
  • 0 kudos

Liquid clustering with incremental ingestion

We ingest data incrementally from a database into delta tables using a column updatedUtc. This column is a datetime and is updated when the row in the database table changes. What about using this non-mutable column in "cluster by"? Would it require ...

Latest Reply
greyamber
New Contributor II
  • 0 kudos

It is recommended to run the OPTIMIZE query on a schedule: https://docs.databricks.com/en/delta/clustering.html#how-to-trigger-clustering

haritashva31
by New Contributor
  • 300 Views
  • 2 replies
  • 0 kudos

50%-off Databricks certification voucher

Hello Databricks Community Team, I am reaching out to inquire about the Databricks certification voucher promotion for completing the Databricks Learning Festival (Virtual) courses. I completed one of the Databricks Learning Festival courses in July 2024...

Latest Reply
J_Anderson
New Contributor III
  • 0 kudos

Thanks for participating in the learning festival! We will be distributing the coupons via email after the event has concluded. You can expect to receive the email by early August.

1 More Replies
AyushPandey
by New Contributor II
  • 1579 Views
  • 6 replies
  • 0 kudos

Unable to reactivate an inactive user

Hi all, I am facing an issue with reactivating an inactive user. I tried the following JSON with the Databricks CLI: run_update = {  "schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp" ],  "Operations": [    {      "op": "replace",      "path": "ac...
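The truncated JSON above appears to follow the standard SCIM 2.0 PatchOp shape; a hedged reconstruction (the "active" path and the boolean value are assumptions based on the SCIM spec, not taken verbatim from the truncated post):

```python
import json

# Hedged reconstruction of a SCIM PatchOp body for re-activating a user.
# The "active" attribute is an assumption from the SCIM 2.0 spec.
payload = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "replace", "path": "active", "value": True},
    ],
}
body = json.dumps(payload)
```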

Latest Reply
Jürg
New Contributor II
  • 0 kudos

@AyushPandey @FunkyBunches just found a dirty workaround for inactive users:
1. Delete the user on the workspace.
2. Delete the user on the account console.
3. Add the user on the account console.
4. Add the user on the workspace again.
If it's a user without...

5 More Replies
pjv
by New Contributor III
  • 116 Views
  • 2 replies
  • 0 kudos

Setting up a proxy for a Python notebook

Hi all, I am running a Python notebook with a web scraper. I want to set up a proxy server that I can use to avoid any IP bans when scraping. Can someone recommend a way to set up a proxy server that can be used for HTTP requests sent from a Databricks...

Latest Reply
pjv
New Contributor III
  • 0 kudos

Hi Kaniz, thanks for the reply. I know how to include HTTP proxies in my Python code and redirect the requests. However, I wondered whether Databricks has any built-in functionality to set up the proxies? Thank you.
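Databricks does not appear to offer a notebook-level proxy setting beyond standard Python mechanisms; a stdlib sketch (the proxy URL is a placeholder you would supply yourself):

```python
import urllib.request

# Sketch of notebook-level proxying: route subsequent urlopen() calls
# through a proxy. The URL below is a hypothetical placeholder.
proxy_url = "http://my-proxy.example.com:8080"
handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
opener = urllib.request.build_opener(handler)
urllib.request.install_opener(opener)  # later urlopen() calls use the proxy
```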

1 More Replies