Community Discussions

Forum Posts

tim-mcwilliams
by New Contributor
  • 6 Views
  • 0 replies
  • 0 kudos

Notebook cell gets hung up but code completes

I have been running into an issue when running a pymc-marketing model in a Databricks notebook. The cell that fits the model gets hung up and the progress bar stops moving; however, the code completes and dumps all needed output into a folder. After the...

GeKo
by New Contributor II
  • 503 Views
  • 3 replies
  • 0 kudos

column "storage_sub_directory" is now always NULL in system.information_schema.tables

Hello, I am running a job that depends on the information provided in the column storage_sub_directory in system.information_schema.tables ... and it worked until 1-2 weeks ago. Now I discovered in the docs that this column is deprecated and always null, ...

Latest Reply
NandiniN
Valued Contributor II
  • 0 kudos

Hello, linking the documentation: https://docs.databricks.com/en/sql/language-manual/information-schema/tables.html#definition. Per the table definition there, STORAGE_SUB_DIRECTORY (STRING) is marked as deprecated and is always NULL.
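
For reference, a minimal sketch of checking this from a notebook, plus one alternative way to find a Delta table's storage location (catalog, schema, and table names are placeholders, not from the thread):

# The column still exists in the view but is documented as deprecated and always NULL.
spark.table("system.information_schema.tables") \
    .select("table_catalog", "table_schema", "table_name", "storage_sub_directory") \
    .show(5, truncate=False)

# For Delta tables, DESCRIBE DETAIL exposes the storage location instead.
spark.sql("DESCRIBE DETAIL main.default.my_table").select("location").show(truncate=False)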

2 More Replies
Abhay_1002
by Visitor
  • 46 Views
  • 1 reply
  • 0 kudos

Issue with Python Package Management in Spark application

In a PySpark application, I am using a set of Python libraries. In order to handle Python dependencies while running the PySpark application, I am using the approach provided by Spark: create an archive file of the Python virtual environment using the required set o...

Latest Reply
NandiniN
Valued Contributor II
  • 0 kudos

Hi, I have not tried it, but based on the doc you have to go by this approach; ./environment/bin/python must be replaced with the correct path. import os from pyspark.sql import SparkSession os.environ['PYSPARK_PYTHON'] = "./environment/bin/python" sp...
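
Filling that out, a minimal runnable sketch of the documented virtualenv-archive approach (the reply above is truncated, so the archive name pyspark_venv.tar.gz and the builder call are assumptions based on the Spark docs):

import os
from pyspark.sql import SparkSession

# Workers use the Python interpreter from the unpacked environment archive.
os.environ['PYSPARK_PYTHON'] = "./environment/bin/python"

# Ship the packed virtual environment and unpack it as "environment" on the executors.
spark = SparkSession.builder \
    .config("spark.archives", "pyspark_venv.tar.gz#environment") \
    .getOrCreate()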

Nagarathna
by New Contributor II
  • 128 Views
  • 3 replies
  • 1 kudos

File not found error when trying to read a JSON file from AWS S3 using with open.

I am trying to read JSON from AWS S3 using with open in a Databricks notebook on a shared cluster. Error message: No such file or directory: '/dbfs/mnt/datalake/input_json_schema.json'. On a single-instance cluster the above error does not occur.

Latest Reply
NandiniN
Valued Contributor II
  • 1 kudos

Hi @Nagarathna, I just tried it on a shared cluster and did not face any issue. What is the exact error that you are facing? The complete stack trace might help. Just to confirm, are you accessing "/dbfs/mnt/datalake/input.json" from the same workspac...
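
For context, a minimal sketch of the access pattern being discussed (the path is the one quoted in the thread; it assumes the DBFS FUSE mount is available on the cluster in question):

import json

# Mounted S3 paths under /dbfs can be opened like local files when the FUSE mount is available.
with open("/dbfs/mnt/datalake/input_json_schema.json") as f:
    schema = json.load(f)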

2 More Replies
liormayn
by New Contributor II
  • 474 Views
  • 3 replies
  • 3 kudos

OSError: [Errno 78] Remote address changed

Hello :) As part of deploying an app that previously ran directly on EMR to Databricks, we are running experiments using LTS 9.1 and getting the following error: PythonException: An exception was thrown from a UDF: 'pyspark.serializers.SerializationEr...

Latest Reply
NandiniN
Valued Contributor II
  • 3 kudos

Hi @liormayn, are you still facing the issue? This was seen in mid-March and the issue was fixed. It can happen for some pip installs when the libraries are in the Workspace. But if you are still facing the issue, I would suggest you create a support ti...

2 More Replies
databricksdev
by New Contributor II
  • 138 Views
  • 2 replies
  • 0 kudos

Can we customize the job run name when running Azure Databricks notebook jobs from Azure Data Factory?

Hi all, we are executing a Databricks notebook activity inside a child pipeline through ADF. We are getting the child pipeline name as the job name while executing the Databricks job. Is it possible to get the master pipeline name as the job name, or to customize the job name thr...

Latest Reply
NandiniN
Valued Contributor II
  • 0 kudos

I think we should raise a product feedback request. I'm not sure whether Databricks or Microsoft would own it, but you may submit feedback for Databricks here: https://docs.databricks.com/en/resources/ideas.html

1 More Replies
MOUNIKASIMHADRI
by New Contributor
  • 227 Views
  • 2 replies
  • 1 kudos

Insufficient Permissions Issue on Databricks

I have encountered a technical issue on Databricks. While executing commands both in Spark and SQL within the Databricks environment, I've run into permission-related errors when selecting files from DBFS: "org.apache.spark.SparkSecurityException: [IN...

Latest Reply
NandiniN
Valued Contributor II
  • 1 kudos

Hi @MOUNIKASIMHADRI, workspace admins get ANY FILE granted by default. They can explicitly grant it to non-admin users. Hence, as suggested in the KB: GRANT SELECT ON ANY FILE TO `<user@domain-name>`
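
A minimal sketch of applying that grant from a notebook (the email address is a placeholder; the same statement can also be run directly in a SQL cell):

# Grant SELECT on ANY FILE so a non-admin user can read files from DBFS.
spark.sql("GRANT SELECT ON ANY FILE TO `user@example.com`")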

1 More Replies
dbx_687_3__1b3Q
by New Contributor III
  • 185 Views
  • 2 replies
  • 0 kudos

Impersonating a user

How do I impersonate a user? I can't find any documentation that explains how to do this or even hints that it's possible. Use case: I perform administrative tasks like assigning grants and roles to catalogs, schemas, and tables for the benefit of busines...

Latest Reply
NandiniN
Valued Contributor II
  • 0 kudos

Hi @dbx_687_3__1b3Q, actually, I have seen impersonation; is this something that you are looking for? https://docs.gcp.databricks.com/en/dev-tools/google-id-auth.html#step-5-impersonate-the-google-cloud-service-account

1 More Replies
AlexG
by New Contributor II
  • 414 Views
  • 3 replies
  • 1 kudos

Query results in CSV file include 'null' string for blank cells

After running a SQL script, when downloading the results to a CSV file, the file includes the string 'null' for blank cells (see screenshot). Is there a setting I can change to simply get empty cells instead?

[Screenshot attachment: AlexG_1-1702927614092.png]
Latest Reply
NandiniN
Valued Contributor II
  • 1 kudos

Hi @AlexG, I tested with table content containing NULL and with empty data, and it works as expected in the download option too. Here is an example: CREATE TABLE my_table_null_test1 ( id INT, name STRING ); INSERT INTO my_table_null_test1 (id, name)...
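
A sketch of a comparable check, building on the table name from the reply (the inserted values are assumptions, since the reply is truncated), to distinguish a true NULL from an empty string:

# Only genuine NULLs should surface as 'null' in the downloaded CSV; empty strings should stay empty.
spark.sql("CREATE TABLE IF NOT EXISTS my_table_null_test1 (id INT, name STRING)")
spark.sql("INSERT INTO my_table_null_test1 VALUES (1, NULL), (2, ''), (3, 'abc')")
spark.table("my_table_null_test1").show()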

2 More Replies
DataBricks_Use1
by New Contributor
  • 45 Views
  • 2 replies
  • 0 kudos

FileReadException Error

Hi, I am getting a FileReadException error while reading a JSON file using the REST API connector. It occurs when the data in the JSON file is huge; it cannot handle more than one lakh (100,000) records. Error details: org.apache.spark.SparkException: Job aborted due to sta...

Latest Reply
NandiniN
Valued Contributor II
  • 0 kudos

Hello @DataBricks_Use1, it would be great if you could add the entire stack trace, as Jose mentioned. There should be a "Caused by:" section below which would give you an idea of the reason for this failure, and then you can work on that. fo...

1 More Replies
Phani1
by Valued Contributor
  • 94 Views
  • 1 reply
  • 0 kudos

Temporary tables or DataFrames?

We have to generate over 70 intermediate tables. Should we use temporary tables or DataFrames, or should we create Delta tables and truncate and reload them? Having too many temporary tables could lead to memory problems. In this situation, what is the mo...

Latest Reply
NandiniN
Valued Contributor II
  • 0 kudos

Hi @Phani1, this would be a use-case-specific answer, so if possible I would suggest working with your Solution Architect on this, or sharing some more details for better guidance. When I say that, I just want to understand whether we really ne...
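
For illustration, a brief sketch contrasting the two options raised in the question (table and column names are placeholders, not from the thread):

# Option 1: keep an intermediate result as a session-scoped temporary view (not materialized).
intermediate_df = spark.table("source_table").filter("amount > 0")
intermediate_df.createOrReplaceTempView("intermediate_step_1")

# Option 2: persist it as a Delta table that is overwritten (or truncated and reloaded) on each run.
intermediate_df.write.format("delta").mode("overwrite").saveAsTable("intermediate_step_1_delta")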

DavidKxx
by New Contributor III
  • 50 Views
  • 1 reply
  • 0 kudos

Can't create branch of public git repo

Hi, I have cloned a public Git repo into my Databricks account. It's a repo associated with an online training course. I'd like to work through the notebooks, maybe make some changes and updates, etc., but I'd also like to keep a clean copy of it. M...

Latest Reply
NandiniN
Valued Contributor II
  • 0 kudos

Hi DavidKxx, You can clone public remote repositories without Git credentials (a personal access token and a username). To modify a public remote repository or to clone or modify a private remote repository, you must have a Git provider username and...

Koa
by New Contributor
  • 72 Views
  • 1 reply
  • 0 kudos

Databricks dashboard state not cleared when logged in as another user

Hi all, I am using Databricks and created a notebook that I would like to run in a dashboard. It works correctly. I shared the dashboard with another user, UserA, with "Can Run" permission. When I log in as UserA and access the dashboard, does a...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Koa, you've encountered a security concern related to Databricks and handling JWT tokens within notebooks. Dashboard state persistence: when you share a dashboard with another user (in this case, UserA), any updates made by that user will re...
