- 1489 Views
- 2 replies
- 0 kudos
Cells' outputs getting appended at each run - Databricks Notebook
Hello Community, I have the following issue: when I run cells in a notebook, the print output from previous cells is appended to the current cell's print output (meaning running cell 1 gives output 1, running cell 2 gives output 1 ...
- 0 kudos
This seems to be linked to installing pycaret.
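If pycaret is indeed the culprit, a minimal sketch of a fix (assuming the library was installed notebook-scoped with %pip) is to uninstall it and restart the Python interpreter, so any stdout capturing it set up is cleared:

```python
# Run in two separate Databricks notebook cells.
# Cell 1 -- remove the library suspected of replaying stdout:
#   %pip uninstall -y pycaret
# Cell 2 -- restart the Python interpreter so the change takes effect:
dbutils.library.restartPython()
```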
- 2239 Views
- 0 replies
- 1 kudos
Cost finding and optimization
Hi Team, could you please suggest the best way to track the cost of Databricks objects/components? Could you also share any best practices for optimizing costs and conducting detailed cost analysis? Regards, Phanindra
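A common starting point (a sketch, assuming system tables are enabled in the workspace; column names follow the system.billing.usage schema) is to aggregate DBU usage from the billing system table:

```python
# Aggregate DBU consumption per SKU over the last 30 days from the
# billing system table (requires system tables to be enabled).
usage = spark.sql("""
    SELECT sku_name,
           SUM(usage_quantity) AS dbus
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY sku_name
    ORDER BY dbus DESC
""")
display(usage)
```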
- 2078 Views
- 1 replies
- 0 kudos
Capture changes at the object level in Databricks
Could you please suggest how to capture changes at the object level in Databricks, such as notebook changes, table DDL changes, view DDL changes, function DDL changes, and workflow changes? We would like to build a dashboard for these changes.
- 0 kudos
Hi @Phani1, good day! Could you kindly clarify your question about change capture? What type of change do you want to capture? If you want to see what modifications a user made to notebooks, tables, and workflows, you can check the audi...
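A sketch of that audit-log approach (assuming audit log system tables are enabled; the service names in the filter are illustrative):

```python
# Recent object-level activity from the audit log system table.
changes = spark.sql("""
    SELECT event_time,
           user_identity.email AS user,
           service_name,
           action_name
    FROM system.access.audit
    WHERE service_name IN ('notebook', 'unityCatalog', 'jobs')
      AND event_time >= date_sub(current_date(), 7)
    ORDER BY event_time DESC
""")
display(changes)
```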
- 1698 Views
- 1 replies
- 0 kudos
Getting internal server error while creating a new query definition
Hi, I am trying to create a query definition using the API '/api/2.0/preview/sql/queries' in Postman but am getting an internal server error. Below is a screenshot of the request. Let me know if I am doing anything wrong here.
- 0 kudos
I'm interested to know how this error was resolved. I'm getting an "Internal Server Error" returned when trying to create queries with version 1.36.1 of the Databricks Terraform Provider. The error provides no other information.
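For comparison, a minimal request against that endpoint (a sketch; the body fields name, query, and data_source_id are assumptions based on the legacy preview queries API, and the host/token values are placeholders):

```python
import requests

host = "https://<workspace-host>"       # placeholder
token = "<personal-access-token>"       # placeholder

# A 500 with no detail often points to a missing or malformed request
# body, so start from the smallest body the endpoint accepts.
resp = requests.post(
    f"{host}/api/2.0/preview/sql/queries",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "name": "My query",
        "query": "SELECT 1",
        "data_source_id": "<data-source-id>",  # placeholder
    },
)
print(resp.status_code, resp.text)
```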
- 7519 Views
- 2 replies
- 0 kudos
Parallel jobs with individual contexts
I was wondering if someone could help us with the implementation here. Our current program spins up 5 jobs through the Databricks API using the same Databricks cluster, but each one needs its own Spark context (specifically, each one will connect to ...
- 0 kudos
You can set up buckets with different credentials, endpoints, and so on: https://docs.databricks.com/en/connect/storage/amazon-s3.html#per-bucket-configuration
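A minimal sketch of that per-bucket configuration, set at session level from a notebook (the bucket name and the secret scope/key names are placeholders):

```python
# Per-bucket S3 credentials: each bucket gets its own key pair, so jobs
# sharing one cluster can talk to different buckets independently.
spark.conf.set("fs.s3a.bucket.my-bucket.access.key",
               dbutils.secrets.get(scope="aws", key="access-key"))
spark.conf.set("fs.s3a.bucket.my-bucket.secret.key",
               dbutils.secrets.get(scope="aws", key="secret-key"))

df = spark.read.json("s3a://my-bucket/path/")  # uses my-bucket's credentials
```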
- 2046 Views
- 1 replies
- 0 kudos
Log notebook activities
Hi friends; I'm working on a project where we are 4 programmers. We are working in a single environment, using only the "Workspaces" folder. Each of us has our own user, which is managed by Azure AD. We had a peak in consumption on the 5th of Feb, so I can see ...
- 0 kudos
Hi @Retired_mod, thanks for your quick answer. Is there no other way to monitor notebook runs? I ask this because adding tags to the cluster and workspace does not solve my problem, considering that everyone uses the same cluster and the same workspa...
- 7976 Views
- 0 replies
- 0 kudos
Data Engineer – Databricks - Remote
Data Engineer – Databricks - Remote. Apply here: Job Application for Data Engineer – Databricks at Jenzabar (greenhouse.io). Jenzabar website: Higher Education Software Solutions - Jenzabar. For over four decades, the higher education experts at Jenzabar h...
- 7381 Views
- 0 replies
- 0 kudos
Can the Community Edition of Databricks run model training examples?
Hi, newcomer here. I am experimenting with the Community Edition of Databricks. I wanted to run the notebook example provided here: https://community.cloud.databricks.com/?o=6085264701896358#notebook/2691200955149229 It failed because it cannot import th...
- 11140 Views
- 0 replies
- 1 kudos
Calling all innovators and visionaries! The 2024 Data Team Awards are open for nominations
Each year, we celebrate the amazing customers that rely on Databricks to innovate and transform their organizations — and the world — with the power of data and AI. The nomination form is now open. Nominations will close on Marc...
- 3862 Views
- 4 replies
- 0 kudos
Databricks XML - Bypassing rootTag and rowTag
I think the current conversion of DataFrame to XML needs to be improved. My DataFrame schema is a perfectly nested schema based on structs, but when I create an XML file I have the following issues: 1) I can't add elements to the root; 2) rootTag and rowTag are required. In ...
- 0 kudos
Here is one of the ways to use the struct field name as rowTag:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types._

    val schema = new StructType().add("Record", new StructType().add("age", IntegerType).add("name", StringType))
    val data = Seq(Row(Row(18, "John ...
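For comparison, a minimal Python sketch of an XML write with explicit root and row tags (assuming Databricks Runtime 14.3+ with the native XML source; earlier runtimes need the spark-xml library and format "com.databricks.spark.xml"):

```python
# Write a small DataFrame with an explicit root element and row element.
df = spark.createDataFrame([(18, "John")], "age INT, name STRING")
(df.write.format("xml")
   .option("rootTag", "Records")  # element wrapping all rows
   .option("rowTag", "Record")    # element emitted per row
   .mode("overwrite")
   .save("/tmp/xml-demo"))
```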
- 3250 Views
- 3 replies
- 5 kudos
The risks of code execution by default on widget change
Speaking from my experience, the default action of widgets triggering code execution upon value change poses risks that outweigh the convenience in certain scenarios. While this feature may seem advantageous in some cases, it can lead to unintended con...
- 5 kudos
I definitely have to agree with the original point: if you have a notebook that you import and you touch any widget value, you're running code, most likely accidentally. I'd love to see a workspace or user-type option where you can change the default...
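To illustrate the behavior being discussed (a sketch; the widget name and values are made up): creating a widget is a one-liner, while the on-change execution behavior is, as far as I know, only a per-notebook setting in the widget panel rather than anything controllable from code:

```python
# Creating a text widget; by default, editing its value in the notebook UI
# can trigger re-execution, depending on the notebook's widget settings
# ("On Widget Change": Run Notebook / Run Accessed Commands / Do Nothing).
dbutils.widgets.text("env", "dev", "Environment")
print(dbutils.widgets.get("env"))
```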
- 1961 Views
- 2 replies
- 1 kudos
Databricks Spark XML Writer
Hi. I'm trying to generate XML as output based on my nested DataFrame. Everything is OK except that I don't know how to add elements to rootTag. I can add elements from rowTag but not in rootTag. Same problem adding attributes to the root: <books version = "...
- 1 kudos
Hi @RobsonNLPT, thanks for bringing up your concerns; always happy to help. Can you please refer to the document below on reading and writing XML files? https://docs.databricks.com/en/query/formats/xml.html Please let me know if this helps and leave a...
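On the root-attribute question specifically: the XML writer accepts basic attributes embedded in the rootTag option value; a hedged sketch (verify against the linked doc for your runtime; data and paths are placeholders):

```python
# Emit <books version="1.0"> as the root element by embedding the
# attribute directly in the rootTag option value.
df = spark.createDataFrame([("b1", "Spark")], "id STRING, title STRING")
(df.write.format("xml")
   .option("rootTag", 'books version="1.0"')
   .option("rowTag", "book")
   .mode("overwrite")
   .save("/tmp/books-xml"))
```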
- 2731 Views
- 1 replies
- 0 kudos
FileAlreadyExistsException error while analyzing table in Notebook
Databricks experts, I'm new to Databricks and have encountered an issue with the ANALYZE TABLE command in a notebook. I created two tables, nyc_taxi and nyc_taxi2, from one CSV file. When executing the following command in the notebook, analyze table nyc_taxi2...
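For reference, the command family in question (a minimal sketch; the table name is taken from the post):

```python
# Table-level statistics for the table mentioned above:
spark.sql("ANALYZE TABLE nyc_taxi2 COMPUTE STATISTICS")
# Column-level statistics, if those are needed as well:
spark.sql("ANALYZE TABLE nyc_taxi2 COMPUTE STATISTICS FOR ALL COLUMNS")
```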
- 10308 Views
- 4 replies
- 0 kudos
Running an exe file in Databricks
Hello, I have an executable file which I want to host and run from Databricks. Is this possible in Databricks using DBFS? If not, what are the other ways to do it in Databricks?
- 0 kudos
Hello, I don't have much information on what kind of executables you would like to run in Databricks; however, I can think of two solutions. Solution 1: Deploy your code in Azure Container Registry as an image and use the endpoint in Databricks. Sol...
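A third option that often works for a standalone Linux binary (a sketch; all paths are placeholders, and the binary must be built for the cluster's OS and architecture):

```python
# Copy the uploaded binary from a Unity Catalog volume (or DBFS) to the
# driver's local disk, mark it executable, and invoke it.
import os
import shutil
import subprocess

src = "/Volumes/main/default/tools/mytool"  # placeholder upload location
dst = "/tmp/mytool"
shutil.copy(src, dst)
os.chmod(dst, 0o755)

result = subprocess.run([dst, "--help"], capture_output=True, text=True)
print(result.stdout)
```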
- 3131 Views
- 1 replies
- 0 kudos
JWT Encoding error while using Azure secret key
My secret value in Azure Key Vault is like below: private_key="""-----BEGIN RSA PRIVATE KEY-----********-----END RSA PRIVATE KEY-----""" Running this command in a Databricks notebook: jwt.encode(claim_set, private_key, algorithm='RS256'). While using the ab...
- 0 kudos
Thanks very much for your troubleshooting methods. I validated the secret scopes and accessing the secrets; these look fine. Key format: I feel the problem is with the key format only. As of now I'm awaiting Azure subscription access, but I printed the secret value...
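If it is the key format, the usual cause is that the secret stores the PEM with literal "\n" sequences instead of real line breaks; a sketch of the fix (scope/key names and the claim are placeholders, reusing the PyJWT jwt.encode call from the original post):

```python
import jwt  # PyJWT

# Fetch the key and restore real PEM line breaks before signing.
private_key = dbutils.secrets.get(scope="kv-scope", key="private-key")
private_key = private_key.replace("\\n", "\n")

token = jwt.encode({"sub": "user"}, private_key, algorithm="RS256")
print(token[:20], "...")
```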