- 8038 Views
- 3 replies
- 0 kudos
Resolved! How to pass variables to a Python file job
Hi everyone, it's relatively straightforward to pass a value as a key-value pair in a notebook job. For a Python file job, however, I couldn't figure out how to do it. Does anyone have any idea? I have tried out different variations for a job wi...
- 0 kudos
Thanks so much for this! By the way, is there a way to do it with the JSON interface? I am struggling to get the parameters if entered in this way
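For what it's worth, parameters given to a Python file task generally arrive as plain command-line arguments (a JSON job spec's "parameters" array is handed to the script as argv), so they can be read with argparse. A minimal sketch under that assumption — the --env and --run_date names are made-up examples, not anything Databricks-specific:

```python
# Minimal sketch of reading job parameters in a Python file task.
# Assumes the job definition passes parameters as command-line arguments,
# e.g. "parameters": ["--env", "prod", "--run_date", "2024-06-01"].
import argparse
import sys

def parse_job_args(argv):
    """Parse the key-value parameters handed to the script."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--env", default="dev")
    parser.add_argument("--run_date", default=None)
    # parse_known_args tolerates extra arguments the job runner may add
    args, _unknown = parser.parse_known_args(argv)
    return args

if __name__ == "__main__":
    args = parse_job_args(sys.argv[1:])
    print(f"env={args.env}, run_date={args.run_date}")
```

The same script works whether the parameters are entered in the UI or in the JSON job definition, since both end up as argv.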
- 976 Views
- 1 replies
- 0 kudos
Chat Bot with Azure Blob and Databricks
Hi Team, I am planning to build a chat bot application for teams to query data from Azure Blob Storage and Databricks tables using Python. Please help me out on how I can start and which tools I can use for this requirement. Thanks in advanc...
- 0 kudos
@Nagrjuna , that's a great idea! Although we do not know about your use case completely, I am sure you would definitely fall in love with our AI/ML Products. To create a Python chat bot application that can pull data from Azure Blob Storage and Datab...
- 1619 Views
- 1 replies
- 1 kudos
Resolved! Workspace FileNotFoundException
I have a model created with CatBoost and exported in ONNX format in my workspace, and I want to download that model to my local machine. I tried to use the Export option in the three-dot menu to the right of the model, but the model is larger than 10 MB ...
- 1 kudos
You need to put the file in FileStore: https://docs.databricks.com/en/dbfs/filestore.html#save-a-file-to-filestore
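As a rough illustration of the suggestion above: /dbfs is the FUSE mount of DBFS, so putting a file into /dbfs/FileStore is an ordinary file copy, after which it can be downloaded from https://&lt;workspace-url&gt;/files/&lt;path&gt; per the linked doc. A sketch under that assumption — the paths and helper name are hypothetical:

```python
# Hedged sketch: copy a model file into the FileStore FUSE mount so it can
# be downloaded via the /files/ URL. Paths are illustrative examples only.
import os
import shutil

def copy_to_filestore(src_path, filestore_dir="/dbfs/FileStore/models"):
    """Copy a file into the given FileStore directory and return the destination path."""
    os.makedirs(filestore_dir, exist_ok=True)
    dest = os.path.join(filestore_dir, os.path.basename(src_path))
    shutil.copy(src_path, dest)
    return dest
```

A file copied to /dbfs/FileStore/models/model.onnx would then (assuming this layout) be reachable at https://&lt;workspace-url&gt;/files/models/model.onnx, bypassing the UI export size limit.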
- 1944 Views
- 1 replies
- 0 kudos
What happened to the ephemeral notebook links and the job IDs?
Hey Databricks, why did you remove the ephemeral notebook links and job IDs from the parallel runs? This has created a huge gap for us. We can no longer view the ephemeral notebooks, and the job IDs are missing from the output. Whatcha doing?...
- 0 kudos
Hi Kaniz, it's funny you mention these things - we are doing some of those - the problem now is that the job ID is obscured from the output, meaning we can't tell which ephemeral notebook goes with which job ID. It looks like the ephemeral notebook ...
- 3113 Views
- 5 replies
- 3 kudos
OSError: [Errno 78] Remote address changed
Hello :) As part of deploying an app that previously ran directly on EMR to Databricks, we are running experiments using LTS 9.1 and getting the following error: PythonException: An exception was thrown from a UDF: 'pyspark.serializers.SerializationEr...
- 3 kudos
Hi @liormayn, I understand. I see the fix went out on 20 March 2024; you would have to restart the clusters. Thanks!
- 2523 Views
- 0 replies
- 0 kudos
Stream-to-stream join NullPointerException
I have a DLT pipeline running in continuous mode. I have a stream-to-stream join which runs for the first 5 hours but then fails with a NullPointerException. I need assistance to know what I need to do to handle this. My code is structured as below: @dl...
- 3094 Views
- 4 replies
- 2 kudos
Resolved! How to choose a compute, and how to find alternatives for the current compute being used?
We are using a compute for an Interactive Cluster in Production which incurs X amount of cost. We want to know what options are available with about the same processing power as the current compute but incurring a cost of Y, which is less...
- 2 kudos
Hello @Ikanip , You can utilize the Databricks Pricing Calculator to estimate costs. For detailed information on compute capacity, please refer to your cloud provider's documentation regarding Virtual Machine instance types.
- 1033 Views
- 0 replies
- 0 kudos
Databricks Running Jobs and Terraform
What happens to a currently running job when a workspace is deployed again using Terraform? Are the jobs paused/resumed, or are they left unaffected without any downtime? Searching for this specific scenario doesn't seem to come up with anything and...
- 581 Views
- 0 replies
- 0 kudos
Archive file support in Jar-type application
In my Spark application, I am using a set of Python libraries. I am submitting the Spark application as a Jar task, but I am not able to find any option to provide archive files. So, in order to handle Python dependencies, I am using this approach: create an archive file...
- 995 Views
- 0 replies
- 0 kudos
Native Slack Integration
Hi, are there any plans to build a native Slack integration? I'm envisioning a one-time connector to Slack that would automatically populate all channels and users to select from, for example when configuring an alert notification. It does not seem ...
- 707 Views
- 1 replies
- 0 kudos
Issue with Python Package Management in Spark application
In a PySpark application, I am using a set of Python libraries. In order to handle Python dependencies while running the PySpark application, I am using the approach provided by Spark: create an archive file of the Python virtual environment using the required set o...
- 0 kudos
Hi, I have not tried it, but based on the doc you have to go by this approach; ./environment/bin/python must be replaced with the correct path.
import os
from pyspark.sql import SparkSession
os.environ['PYSPARK_PYTHON'] = "./environment/bin/python"
sp...
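Filling out that snippet a little (an untested sketch of the venv-archive approach the reply refers to; environment.tar.gz and ./environment/bin/python are the doc's illustrative names, not fixed values):

```python
# Hedged sketch of the packed-virtualenv approach from the Spark docs.
# The archive is a packed venv shipped alongside the job; names below
# are illustrative.
import os

# PYSPARK_PYTHON must point at the Python interpreter inside the
# unpacked archive, and must be set before the SparkSession is created.
os.environ["PYSPARK_PYTHON"] = "./environment/bin/python"

# The session is then built with the archive attached (requires pyspark;
# the "#environment" fragment names the directory it unpacks into):
# from pyspark.sql import SparkSession
# spark = (SparkSession.builder
#          .config("spark.archives", "environment.tar.gz#environment")
#          .getOrCreate())
```

The key ordering detail is that the environment variable is read at session startup, so setting it after getOrCreate() has no effect.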
- 2083 Views
- 3 replies
- 1 kudos
File not found error when trying to read a JSON file from AWS S3 using with open
I am trying to read JSON from AWS S3 using with open in a Databricks notebook on a shared cluster. Error message: No such file or directory: '/dbfs/mnt/datalake/input_json_schema.json'. On a single-instance cluster the above error does not occur.
- 1 kudos
Hi @Nagarathna, I just tried it on a shared cluster and did not face any issue. What is the exact error that you are facing? The complete stack trace might help. Just to confirm, are you accessing the "/dbfs/mnt/datalake/input.json" from the same workspac...
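For reference, the with open pattern under discussion is plain local-file I/O; whether it works depends only on whether the /dbfs FUSE path is visible in the cluster mode being used. A minimal sketch (the helper name is mine):

```python
# Minimal sketch of reading a JSON document through the local filesystem,
# as one would with a /dbfs/mnt/... FUSE path on Databricks. Whether that
# mount is visible depends on the cluster's access mode.
import json

def read_json_file(path):
    """Load a JSON document from a locally visible path."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

If the FUSE path is not visible on a shared cluster, reading the same file via Spark (spark.read.json on the dbfs:/ URI) is the usual alternative.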
- 1049 Views
- 2 replies
- 0 kudos
Can we customize the job run name when running Azure Databricks notebook jobs from Azure Data Factory?
Hi all, we are executing a Databricks notebook activity inside a child pipeline through ADF. We are getting the child pipeline name as the job name when executing the Databricks job. Is it possible to get the master pipeline name as the job name, or to customize the job name thr...
- 0 kudos
I think we should raise a product feedback request. Not sure whether Databricks or Microsoft would own it, but you may submit feedback for Databricks here: https://docs.databricks.com/en/resources/ideas.html
- 2140 Views
- 3 replies
- 1 kudos
Query results in CSV file include 'null' string for blank cells
After running a SQL script, when downloading the results to a CSV file, the file includes a 'null' string for blank cells (see screenshot). Is there a setting I can change to simply get empty cells instead?
- 1 kudos
Hi AlexG, I tested with table content containing null and with empty data, and it works as expected in the download option too. Here is an example:
CREATE TABLE my_table_null_test1 (
  id INT,
  name STRING
);
INSERT INTO my_table_null_test1 (id, name)...
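If the download option still produces literal null strings, one workaround is to post-process the exported CSV yourself. A small stdlib sketch (the helper is hypothetical, and this edits the file after export rather than changing any Databricks setting):

```python
# Hedged sketch: rewrite CSV text so cells containing the literal string
# "null" become empty cells. Post-processes the export; does not change
# how the UI writes the file.
import csv
import io

def blank_out_nulls(csv_text):
    """Return CSV text with every cell equal to 'null' replaced by an empty cell."""
    reader = csv.reader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.writer(out)
    for row in reader:
        writer.writerow(["" if cell == "null" else cell for cell in row])
    return out.getvalue()
```

Quoting is preserved by round-tripping through the csv module rather than doing a raw string replace, which would also mangle values like "nullable".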
- 954 Views
- 2 replies
- 0 kudos
FileReadException Error
Hi, I am getting a FileReadException error while reading a JSON file using the REST API connector. It occurs when the data in the JSON file is huge; it is not able to handle more than 1 lakh (100,000) records. Error details: org.apache.spark.SparkException: Job aborted due to sta...
- 0 kudos
Hello @DataBricks_Use1, it would be great if you could add the entire stack trace, as Jose mentioned. But there should be a "Caused by:" section below, which would give you an idea of the reason for this failure, and then you can work on that. Fo...