- 1118 Views
- 2 replies
- 0 kudos
I'm using an Azure Databricks notebook to read an Excel file from a folder inside a mounted Azure Blob Storage container. The mounted Excel location is like: "/mnt/2023-project/dashboard/ext/Marks.xlsx". 2023-project is the mount point and dashboard is the name o...
Latest Reply
Hi @vichus1995, hope all is well! Just wanted to check in on whether you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Thanks!
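One way to read an Excel file from a mounted path like the one above, assuming pandas and openpyxl are installed on the cluster (an assumption, not stated in the thread), is to go through the driver-local /dbfs mirror of the mount:

```python
# Hedged sketch: DBFS mounts are also exposed on the driver's local
# filesystem under /dbfs, so driver-side libraries like pandas can
# read the file directly once the path is translated.

def to_driver_path(mount_path: str) -> str:
    """Map a DBFS mount path like /mnt/... to the driver-local /dbfs path."""
    if not mount_path.startswith("/"):
        raise ValueError("expected an absolute DBFS path")
    return "/dbfs" + mount_path

excel_path = to_driver_path("/mnt/2023-project/dashboard/ext/Marks.xlsx")
# On a Databricks cluster (pandas/openpyxl availability is an assumption):
# import pandas as pd
# df = pd.read_excel(excel_path, engine="openpyxl")
```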
by Jits • New Contributor II
- 525 Views
- 2 replies
- 3 kudos
Hi All, I am creating a table using the Databricks SQL editor. The table definition is:

DROP TABLE IF EXISTS [database].***_test;
CREATE TABLE [database].***_jitu_test (
  id bigint
)
USING delta
LOCATION 'test/raw/***_jitu_test'
TBLPROPERTIES ('delta.minReaderVersi...
Latest Reply
Hi @jitendra goswami, we haven't heard from you since the last response from @Werner Stinckens, and I was checking back to see if their suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be helpf...
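One common cause of failures with a DDL like the one above is that the relative LOCATION ('test/raw/...') is not a fully qualified storage path. A hedged sketch of the same DDL with a fully qualified path; the container, storage account, and table names below are placeholders, not values from the thread:

```python
# Hedged sketch: Delta external tables generally expect a fully
# qualified LOCATION (e.g. an abfss:// URI on Azure) rather than a
# relative path. All names here are placeholders.
ddl = """
CREATE TABLE IF NOT EXISTS my_db.jitu_test (
  id BIGINT
)
USING DELTA
LOCATION 'abfss://container@account.dfs.core.windows.net/test/raw/jitu_test'
TBLPROPERTIES ('delta.minReaderVersion' = '2', 'delta.minWriterVersion' = '5')
"""
# On Databricks: spark.sql(ddl)
```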
- 1079 Views
- 2 replies
- 1 kudos
In DBX Community Edition, the Autoloader works using the S3 mount. S3 mount and Autoloader:

dbutils.fs.mount(f"s3a://{access_key}:{encoded_secret_key}@{aws_bucket_name}", f"/mnt/{mount_name}")
from pyspark.sql import SparkSession
from pyspark.sql.functions ...
Latest Reply
Hi @Joe Gorse, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers you...
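After the mount shown above, an Auto Loader stream over the mounted bucket can be sketched as follows; the mount name, source directory, and schema location are placeholders, and a Databricks cluster (where `spark` is defined) is assumed:

```python
# Hedged sketch: Auto Loader reads from a mounted S3 path via the
# cloudFiles source. Paths below are placeholders.
mount_name = "my-bucket"  # assumption: mount created with dbutils.fs.mount earlier
source_path = f"/mnt/{mount_name}/landing/"
options = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": f"/mnt/{mount_name}/_schemas/landing",
}
# On Databricks:
# df = (spark.readStream.format("cloudFiles")
#       .options(**options)
#       .load(source_path))
```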
by signo • New Contributor II
- 1131 Views
- 3 replies
- 2 kudos
Databricks Runtime: 12.2 LTS, Spark: 3.3.2, Delta Lake: 2.2.0. A target table with schema ([c1: integer, c2: integer]) allows us to write into it using data with schema ([c1: integer, c2: double]). I expected it to throw an exception (same a...
Latest Reply
Hi @Sigrun Nordli, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers...
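The behavior described above is consistent with Spark's store assignment policy: on write, the incoming DOUBLE column can be implicitly cast to the target INTEGER column instead of raising. A hedged sketch, where the table name is a placeholder and the strict-policy setting is shown only as one possible way to make such writes fail instead:

```python
# Hedged sketch: Spark's spark.sql.storeAssignmentPolicy governs
# whether a lossy DOUBLE -> INTEGER assignment is allowed on write.
#
# On Databricks, to reject such writes (one option, not from the thread):
# spark.conf.set("spark.sql.storeAssignmentPolicy", "STRICT")
# df.write.format("delta").mode("append").saveAsTable("target_table")
#
# The lossy part of the implicit cast, illustrated in plain Python:
def cast_double_to_int(x: float) -> int:
    return int(x)  # fractional part is silently dropped

assert cast_double_to_int(3.9) == 3
```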
by kll • New Contributor III
- 2710 Views
- 2 replies
- 3 kudos
I am attempting to apply a function to a pyspark DataFrame, save the API response to a new column, and then parse it using `json_normalize`. This works fine in pandas; however, I run into an exception with `pyspark`.

import pyspark.pandas as ps
i...
Latest Reply
Hi @Keval Shah, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers yo...
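For context on what `json_normalize` does to an API response, here is a simplified stdlib sketch of the flattening step (pandas itself is not assumed here; the sample response is invented for illustration):

```python
import json

def flatten(record: dict, parent: str = "", sep: str = ".") -> dict:
    """Flatten nested dicts roughly the way pandas.json_normalize does."""
    out = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))
        else:
            out[name] = value
    return out

# Hypothetical API response, for illustration only:
resp = json.loads('{"id": 1, "meta": {"status": "ok", "code": 200}}')
flat = flatten(resp)
# flat == {"id": 1, "meta.status": "ok", "meta.code": 200}
```

Collecting responses to the driver and flattening them there is one workaround when the pyspark-pandas path raises.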
- 1343 Views
- 2 replies
- 2 kudos
I have a Python script running as a Databricks job. Is there a way I can run this job with different sets of parameters automatically or programmatically, without using the "run with different parameters" option available in the UI?
Latest Reply
Hi @Divya Bhadauria, we haven't heard from you since the last response from @Lakshay Goel, and I was checking back to see if their suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be helpful to ...
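One programmatic route for the question above is the Jobs REST API's run-now endpoint, which accepts per-run parameters. A hedged sketch of building the request body; the job ID and parameter values are placeholders:

```python
import json

# Hedged sketch: POST /api/2.1/jobs/run-now lets a job be launched
# repeatedly with different parameters, no UI needed. For a Python
# script task, per-run values go in "python_params".
def run_now_payload(job_id: int, params: dict) -> str:
    return json.dumps({"job_id": job_id, "python_params": list(params.values())})

body = run_now_payload(123, {"env": "dev", "date": "2023-06-01"})
# POST this body to https://<workspace-url>/api/2.1/jobs/run-now
# with an "Authorization: Bearer <token>" header (e.g. via urllib.request).
```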
- 1214 Views
- 0 replies
- 0 kudos
Why / when should we choose Spark on Databricks over Snowpark if the data we are processing resides in Snowflake?
- 611 Views
- 0 replies
- 0 kudos
Hi Team, our cluster is currently on DBR 12.1 but it spins up VMs with Ubuntu 18.04 LTS. 18.04 will be EOL soon. According to this https://learn.microsoft.com/en-us/azure/databricks/release-notes/runtime/12.1 the OS version should be 20.04, and now a bit...
by rk_db • New Contributor II
- 374 Views
- 0 replies
- 1 kudos
We have a use case of implementing a custom logging framework for capturing various details while executing a notebook. At the end of the notebook, the log file gets copied from the driver's /tmp folder into a Data Lake Storage container. While this pro...
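The pattern described above can be sketched as follows: log to a file under the driver's /tmp, then copy it out at the end of the notebook. The ADLS destination path is a placeholder:

```python
import logging
import os
import tempfile

# Hedged sketch: driver-local log file, copied to ADLS at notebook end.
log_path = os.path.join(tempfile.gettempdir(), "notebook_run.log")
logger = logging.getLogger("notebook")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(log_path, mode="w")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("step 1 complete")
handler.flush()

# At the end of the notebook, on Databricks (destination is a placeholder):
# dbutils.fs.cp(f"file:{log_path}",
#               "abfss://logs@account.dfs.core.windows.net/run.log")
```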
- 1508 Views
- 1 replies
- 2 kudos
Hello, I am somewhat new to Databricks and am trying to build a Q&A application based on a collection of documents. I need to move .pdf and .docx files from my local machine to storage in Databricks and eventually a document store. My questions are: Wh...
Latest Reply
Hi all, I took an initial stab at task one with some success using the Databricks CLI. Here are the steps:
Open a Command/Anaconda prompt and enter: pip install databricks-cli
Go to your Databricks console and under settings find "User Settings" and...
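Once the CLI is installed and configured (databricks configure --token), local files can be copied into DBFS with the CLI's fs cp command. A hedged sketch that builds the command; the local and DBFS paths are placeholders:

```python
import subprocess

# Hedged sketch: uploading a local document to DBFS via the legacy
# databricks-cli. Paths are placeholders, not from the thread.
def upload_cmd(local_path: str, dbfs_path: str) -> list:
    return ["databricks", "fs", "cp", local_path, dbfs_path, "--overwrite"]

cmd = upload_cmd("./docs/guide.pdf", "dbfs:/FileStore/docs/guide.pdf")
# Requires a configured CLI on this machine:
# subprocess.run(cmd, check=True)
```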
- 897 Views
- 3 replies
- 1 kudos
I am using DLT to load CSV files in ADLS; below is my SQL query in the notebook:

CREATE OR REFRESH STREAMING LIVE TABLE test_account_raw
AS SELECT * FROM cloud_files(
  "abfss://my_container@my_storageaccount.dfs.core.windows.net/test_csv/",
  "csv",
  map("h...
Latest Reply
Thank you everyone, the problem is resolved; it went away once I had workspace admin access.
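For reference, the Python equivalent of the SQL cloud_files() call above can be sketched as follows; this assumes a DLT pipeline notebook where `dlt` and `spark` are available, and reuses the placeholder container/storage-account names from the question:

```python
# Hedged sketch: Python DLT equivalent of the SQL cloud_files() source.
options = {
    "cloudFiles.format": "csv",
    "header": "true",
}
source = "abfss://my_container@my_storageaccount.dfs.core.windows.net/test_csv/"

# In a DLT pipeline notebook:
# import dlt
# @dlt.table(name="test_account_raw")
# def test_account_raw():
#     return (spark.readStream.format("cloudFiles")
#             .options(**options)
#             .load(source))
```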
- 3506 Views
- 3 replies
- 3 kudos
I am using the code below to create and read widgets, assigning a default value:

dbutils.widgets.text("pname", "default", "parameter_name")
pname = dbutils.widgets.get("pname")

I am using this widget parameter in some SQL scripts; one example is given below...
Latest Reply
Hi @Ashwathy P P, which Databricks Runtime are you using? A known issue is that widget state may not be properly cleared after pressing Run All, even after clearing or removing the widget in code. If this happens, you will see a discrepancy be...
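The widget-plus-SQL pattern above can be sketched as follows; on a cluster, dbutils.widgets.get returns the current value, and the dict-based fallback below only mimics the default behavior for illustration (the table and column names are placeholders):

```python
# Hedged sketch: read a widget value with a default, then substitute
# it into a SQL string. The stand-in dict imitates dbutils.widgets.
def get_widget(widgets: dict, name: str, default: str) -> str:
    return widgets.get(name, default)

pname = get_widget({}, "pname", "default")
query = f"SELECT * FROM my_table WHERE parameter_name = '{pname}'"

# On Databricks:
# dbutils.widgets.text("pname", "default", "parameter_name")
# pname = dbutils.widgets.get("pname")
```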
by BamBam • New Contributor II
- 1007 Views
- 0 replies
- 0 kudos
I have a STRING column in a DLT table that was loaded using the SQL Autoloader via a JSON file. When I use the "schema_of_json" function in a SQL statement, passing in the literal string from the STRING column, I get this output: ARRAY<STRUCT<firstFetchD...
- 749 Views
- 2 replies
- 0 kudos
Databricks recently introduced an extension for VS Code. This is a good feature, but my company has some security concerns. If I wanted to block connecting to Databricks from Visual Studio Code, how can I do it? Is there any process which blocks con...
Latest Reply
Hi @Kiran Gogula, we haven't heard from you since the last response from @Debayan Mukherjee, and I was checking back to see if their suggestions helped you. Otherwise, if you have a solution, please share it with the community, as it can be helpful t...
by BWong • New Contributor III
- 2449 Views
- 2 replies
- 1 kudos
Hi all, I have a table created by DLT. Initially I set cloudFiles.inferColumnTypes to false and all columns are stored as strings. However, I now want to use cloudFiles.inferColumnTypes=true. I dropped the table and re-ran the pipeline, which fai...
Latest Reply
Hi @Billy Wong, thank you for posting your question in our community! We are happy to assist you. To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers yo...
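For a scenario like the one above, dropping the table by hand is usually not enough: changing Auto Loader options generally calls for a full refresh of the DLT pipeline so the stream checkpoint and the schema stored under schemaLocation are rebuilt. A hedged sketch of the changed source options; paths are placeholders:

```python
# Hedged sketch: re-enabling type inference for an Auto Loader source.
# Changing these options typically requires a *full refresh* of the DLT
# pipeline, not just dropping the target table. Paths are placeholders.
options = {
    "cloudFiles.format": "csv",
    "cloudFiles.inferColumnTypes": "true",
    "cloudFiles.schemaLocation": "/mnt/checkpoints/my_table_schema",
}
# In the pipeline:
# spark.readStream.format("cloudFiles").options(**options).load("/mnt/raw/my_table/")
```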