Training @ Data & AI World Tour 2023
Join your peers at the Data + AI World Tour 2023! Explore the latest advancements, hear real-world case studies and discover best practices that deliver data and AI transformation. From the Databricks Lakehouse Platform to open source technologies in...
- 21401 Views
- 3 replies
- 4 kudos
Introducing Mini Flush: Your Ticket to Ultimate Casino Thrills! Are you ready to embark on an electrifying journey into the world of online gambling? If so, look no further than Vijaybet Online Casino! Our state-of-the-art platform is your gateway to ...
- 4 kudos
Resolved! Problem creating external delta table on non-AWS s3 bucket
I am testing Databricks with non-AWS S3 object storage. I can access the non-AWS S3 bucket by setting these parameters: sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "XXXXXXXXXXXXXXXXXXXX") sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key...
- 1635 Views
- 1 replies
- 1 kudos
Found the solution to disable it. Can close this question.
- 1 kudos
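A minimal sketch of the approach described in this thread, using the hadoop-aws S3A options the poster already set. The bucket name, endpoint, and credentials are placeholders; on Databricks you may also need to set the same values as cluster-level `spark.hadoop.fs.s3a.*` configuration so the SQL/metastore layer can see them when creating the external table.

```python
# Sketch only: bucket, endpoint, and credentials below are placeholders.
hconf = sc._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key", "XXXXXXXXXXXXXXXXXXXX")
hconf.set("fs.s3a.secret.key", "YYYYYYYYYYYYYYYYYYYY")
hconf.set("fs.s3a.endpoint", "https://s3.my-object-store.example.com")  # non-AWS endpoint
hconf.set("fs.s3a.path.style.access", "true")  # many S3-compatible stores need path-style URLs

# Write Delta data to the external location, then register it as an external table.
df = spark.range(10)
df.write.format("delta").mode("overwrite").save("s3a://my-bucket/delta/test_table")

spark.sql("""
  CREATE TABLE IF NOT EXISTS test_table
  USING DELTA
  LOCATION 's3a://my-bucket/delta/test_table'
""")
```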
Schema owned by Service Principal shows error in PBI
Background info: 1. We have Unity Catalog enabled. 2. All of our jobs are run by a Service Principal that has all the necessary access it needs. Issue: One of the jobs checks existing schemas against the ones it is supposed to create in that given run, and if ...
- 28102 Views
- 0 replies
- 3 kudos
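For reference, a minimal sketch of the "check existing schemas, create the missing ones" step the post describes. The catalog and schema names are hypothetical; the job would run this as the Service Principal that owns the objects.

```python
# Hypothetical catalog and schema names.
catalog = "main"
expected_schemas = {"bronze", "silver", "gold"}

# First column of SHOW SCHEMAS is the schema name.
existing = {row[0] for row in spark.sql(f"SHOW SCHEMAS IN {catalog}").collect()}

for schema in expected_schemas - existing:
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS {catalog}.{schema}")
```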
Suggestions for Python are not working
Tab and Shift+Tab are not suggesting anything in Python code. Tab only works as indentation and Shift+Tab does nothing. What can I do?
- 1280 Views
- 0 replies
- 0 kudos
How to retrieve a Job Name from the SparkContext
We are currently starting to build certain data pipelines using Databricks. For this we use Jobs, and the steps in these Jobs are implemented in Python Wheels. We are able to retrieve the Job ID, Job Run ID and Task Run ID in our Python Wheels from the ...
- 7243 Views
- 0 replies
- 0 kudos
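One workable approach (not necessarily the thread's answer) is to pass the job ID into the wheel via the `{{job.id}}` dynamic value reference and resolve the job name through the Jobs API. A minimal sketch, assuming hypothetical workspace URL, token, and parameter names:

```python
# Sketch: configure the wheel task with a parameter such as "--job-id", "{{job.id}}",
# then resolve the job name through the Jobs API. Host and token are placeholders.
import argparse
import requests

parser = argparse.ArgumentParser()
parser.add_argument("--job-id", required=True)
args = parser.parse_args()

host = "https://<workspace-url>"
token = "<personal-access-token-or-sp-token>"

resp = requests.get(
    f"{host}/api/2.1/jobs/get",
    headers={"Authorization": f"Bearer {token}"},
    params={"job_id": args.job_id},
)
resp.raise_for_status()
job_name = resp.json()["settings"]["name"]
print(f"Running inside job: {job_name}")
```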
Long run time with %run command
My team has started to see long run times on cells when using the %run command to run another notebook. The notebook that we are calling with %run only contains variable settings, function definitions, and library imports. In some cases I have seen in ...
- 5112 Views
- 6 replies
- 1 kudos
- 1 kudos
Case statements return the same value
I have these 4 case statements: count(*) as Total_claim_reciepts, count(case when claim_id like '%M%' and receipt_flag = 1 and is_firstpassclaim = 1 then 0 else claim_id end) as Total_claim_reciepts, count(case when claim_status ='DENIED' and claim_repa...
- 998 Views
- 0 replies
- 0 kudos
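The likely cause: COUNT counts every non-NULL value, and `else claim_id` returns a non-NULL value for every row, so each conditional count ends up counting all rows. Returning NULL (or omitting the ELSE branch) counts only the rows that match the condition. A minimal sketch, with a hypothetical `claims` table and the column names from the post:

```python
# Hypothetical table name `claims`; column names follow the post.
# COUNT() skips only NULLs, so the CASE must return NULL for non-matching rows.
fixed = spark.sql("""
  SELECT
    count(*)                                                           AS total_rows,
    count(CASE WHEN claim_id LIKE '%M%'
                AND receipt_flag = 1
                AND is_firstpassclaim = 1
               THEN claim_id END)                                      AS total_claim_reciepts,
    count(CASE WHEN claim_status = 'DENIED' THEN claim_id END)         AS total_denied_claims
  FROM claims
""")
fixed.show()
```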
NATIVE_XML_DATA_SOURCE_NOT_ENABLED
I'm trying to read an XML file and receiving the following error. I've installed the Maven library spark-xml on the cluster, but I'm still receiving the error. Is there anything I'm missing? Error: AnalysisException: [NATIVE_XML_DATA_SOURCE_NOT_ENABLED] N...
- 8717 Views
- 2 replies
- 0 kudos
I've already tried spark.read.format('com.databricks.spark.xml'); it returns the same error.
- 0 kudos
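For context, a sketch of the two common read paths; the file path and rowTag value are hypothetical. The native XML reader is only available on newer runtimes (e.g. DBR 14.3+), while the spark-xml Maven library uses its own format name:

```python
# Sketch only: path and rowTag are hypothetical.
path = "/Volumes/main/default/raw/books.xml"

# Option 1: native XML reader (newer runtimes, e.g. DBR 14.3+).
df_native = (spark.read.format("xml")
             .option("rowTag", "book")
             .load(path))

# Option 2: the spark-xml library installed from Maven (com.databricks:spark-xml).
df_lib = (spark.read.format("com.databricks.spark.xml")
          .option("rowTag", "book")
          .load(path))
```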
Do we need to request Databricks to enable MosaicML?
Hi Team, I am not seeing any specific articles/guides on using MosaicML on Databricks. After the MosaicML acquisition, has anything changed in terms of how MosaicML is used, or do we just use the regular functions?
- 1852 Views
- 0 replies
- 0 kudos
Databricks Delta table Insert Data Error
When trying to insert data into a Delta table in Databricks, an error occurs as shown below: [TASK_WRITE_FAILED] Task failed while writing rows to abfss://cont-01@dlsgolfzon001.dfs.core.windows.net/dir-db999_test/D_RGN_INFO_TMP. In SQL, the results ...
- 2106 Views
- 3 replies
- 0 kudos
Seems OK to me. Have you tried to display the data from table A and also the B/C join?
- 0 kudos
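Following the reply's suggestion, a minimal sketch of inspecting the source table and the join before running the INSERT. Table names A, B, C and the join key are placeholders taken from the reply:

```python
# Table names and join key are placeholders from the reply above.
spark.table("A").limit(20).display()

joined = spark.sql("""
  SELECT b.*, c.*
  FROM B b
  JOIN C c ON b.key = c.key   -- hypothetical join key
""")
joined.limit(20).display()
```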
how to make distributed predictions with sklearn model?
So I have an sklearn-style model which predicts on a pandas df. The data to predict on is a Spark df. Simply converting the whole thing at once to pandas and predicting is not an option due to time and memory constraints. Is there a way to chunk a spar...
- 1093 Views
- 1 replies
- 0 kudos
- 0 kudos
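A common pattern (not necessarily the thread's accepted answer) is to broadcast the fitted model and score the Spark DataFrame batch by batch with mapInPandas, so only one Arrow batch is ever converted to pandas at a time. A minimal sketch, assuming `model` is the fitted sklearn estimator, `spark_df` is the Spark DataFrame to score, and the feature column names are hypothetical:

```python
from pyspark.sql.types import StructType, StructField, DoubleType

feature_cols = ["f1", "f2", "f3"]                       # hypothetical feature columns
bc_model = spark.sparkContext.broadcast(model)          # ship the fitted model to executors

def predict_batches(batches):
    # Each `pdf` is one Arrow batch converted to pandas, so only a chunk of the
    # Spark DataFrame is held in memory on an executor at a time.
    for pdf in batches:
        pdf = pdf.copy()
        pdf["prediction"] = bc_model.value.predict(pdf[feature_cols])
        yield pdf

out_schema = StructType(spark_df.schema.fields + [StructField("prediction", DoubleType())])
scored = spark_df.mapInPandas(predict_batches, schema=out_schema)
```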
problem with workspace after metastore deleted
I am completely new to Databricks on AWS and started working on it a week ago. Please excuse me if I ask or did something silly. I created a workspace and a single-node cluster for testing. A metastore was created from the Databricks quickstart and it was automa...
- 2526 Views
- 1 replies
- 0 kudos
I restarted the compute node and this problem went away. ErrorClass=METASTORE_DOES_NOT_EXIST] Metastore 'b11fb1a0-a462-4dfb-b91b-e0795fde10b0' does not exist. New question: I am testing Databricks with non-AWS S3 object storage. I can access the non-A...
- 0 kudos
drop duplicates within watermark
Recently we have been using Structured Streaming to ingest data. We want to use a watermark to drop duplicated events, but we encountered some weird behavior and an unexpected exception. Can anyone help explain what the expected behavior is and how should ...
- 2527 Views
- 3 replies
- 1 kudos
Can any maintainer help me with this question?
- 1 kudos
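For reference, a minimal sketch of the dropDuplicatesWithinWatermark API (Spark 3.5+ / recent Databricks runtimes). The source DataFrame, column names, checkpoint path, and target table are assumptions; unlike plain dropDuplicates without a watermark, the dedup state here is dropped once the watermark passes the event time:

```python
# `events` is assumed to be a streaming DataFrame with an `event_id` business key
# and an `event_time` timestamp column (hypothetical names).
deduped = (
    events
    .withWatermark("event_time", "10 minutes")       # how late a duplicate may still arrive
    .dropDuplicatesWithinWatermark(["event_id"])     # Spark 3.5+ API
)

query = (
    deduped.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/dedupe")  # placeholder path
    .toTable("events_deduped")                                # hypothetical target table
)
```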
Resolved! Read zstd file from Databricks
I just started to read a `zstd`-compressed file in Databricks on Azure, Runtime 14.1 on Spark 3.5.0. I've set the PySpark commands as follows: path = f"wasbs://{container}@{storageaccount}.blob.core.windows.net/test-zstd"; schema = "some schema"; df = spark.read...
- 3580 Views
- 2 replies
- 1 kudos
The available compression types are format-dependent. For JSON, zstd is not (yet) available, whereas for Parquet it is.
- 1 kudos
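Building on that reply, a small sketch showing the Parquet path, where zstd is supported. The storage path below is a placeholder mirroring the question:

```python
# Placeholder path mirroring the question; adjust container/storage account.
path = "wasbs://container@storageaccount.blob.core.windows.net/test-zstd-parquet"

# Writing Parquet with zstd compression is supported.
spark.range(1000).write.mode("overwrite").option("compression", "zstd").parquet(path)

# Reading is transparent: the codec is recorded in the Parquet file metadata.
df = spark.read.parquet(path)
df.show(5)
```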
How to schedule/refresh Databricks alerts using the REST API?
Hi, I am deploying Databricks SQL alerts using the REST API, but I can't seem to figure out how to schedule their refresh task. I went through the documentation; it says "Alerts can be scheduled using the sql_task type of the Jobs API, e.g. Jobs/Create". How...
- 3053 Views
- 1 replies
- 1 kudos
What they mention in the API docs is that you can create a job with a sql_task of type Alert. To make it easier, you can try creating the job in the UI first and downloading the JSON config. Here is an example with the main parameters that should ...
- 1 kudos
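A minimal sketch of that approach: creating a scheduled job whose sql_task refreshes an existing SQL alert via the Jobs API 2.1. The workspace URL, token, alert ID, warehouse ID, job name, and cron expression are all placeholders:

```python
import requests

host = "https://<workspace-url>"          # placeholder
token = "<personal-access-token>"         # placeholder

payload = {
    "name": "refresh-my-alert",                        # hypothetical job name
    "schedule": {
        "quartz_cron_expression": "0 0 * * * ?",       # hourly, as an example
        "timezone_id": "UTC",
    },
    "tasks": [
        {
            "task_key": "refresh_alert",
            "sql_task": {
                "alert": {"alert_id": "<alert-id>"},   # ID of the existing SQL alert
                "warehouse_id": "<warehouse-id>",
            },
        }
    ],
}

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())   # contains the new job_id
```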
Labels: 12.2 LST (1), Access Data (2), Access Delta Tables (2), Account reset (1), ADF Pipeline (1), ADLS Gen2 With ABFSS (1), Analytics (1), Apache spark (1), API (2), API Documentation (2), Architecture (1), Auto-loader (1), Autoloader (2), AWS (3), AWS security token (1), AWSDatabricksCluster (1), Azure (2), Azure data disk (1), Azure databricks (10), Azure Databricks SQL (5), Azure databricks workspace (1), Azure Unity Catalog (4), Azure-databricks (1), AzureDatabricks (1), AzureDevopsRepo (1), Best Practices (1), Big Data Solutions (1), Billing (1), Billing and Cost Management (1), Bronze Layer (1), Bug (1), Catalog (1), Certification (1), Certification Exam (1), Certification Voucher (1), CICD (2), cleanroom (1), Cli (1), Cloud_files_state (1), cloudera sql (1), CloudFiles (1), Cluster (3), clusterpolicy (1), Code (1), Community Group (1), Community Social (1), Compute (2), conditional tasks (1), Connection (1), Cost (2), Credentials (1), CustomLibrary (1), CustomPythonPackage (1), DABs (1), Data Engineering (2), Data Explorer (1), Data Ingestion & connectivity (1), DataAISummit2023 (1), DatabrickHive (1), databricks (2), Databricks Academy (1), Databricks Alerts (1), Databricks Audit Logs (1), Databricks Certified Associate Developer for Apache Spark (1), Databricks Cluster (1), Databricks Clusters (1), Databricks Community (1), Databricks connect (1), Databricks Dashboard (1), Databricks delta (2), Databricks Delta Table (2), Databricks Documentation (1), Databricks JDBC (1), Databricks Job (1), Databricks jobs (2), Databricks Lakehouse Platform (1), Databricks notebook (1), Databricks Notebooks (2), Databricks Platform (1), Databricks Pyspark (1), Databricks Python Notebook (1), Databricks Repo (1), Databricks SQL (1), Databricks SQL Alerts (1), Databricks SQL Warehouse (1), Databricks UI (1), Databricks Unity Catalog (3), Databricks Workflow (2), Databricks Workflows (2), Databricks workspace (1), DatabricksJobCluster (1), DataDays (1), Datagrip (1), DataMasking (2), dbdemos (1), DBRuntime (1), DDL (1), deduplication (1), Delt Lake (1), Delta (13), Delta Live Pipeline (3), Delta Live Table (5), Delta Live Table Pipeline (5), Delta Live Table Pipelines (4), Delta Live Tables (6), Delta Sharing (2), deltaSharing (1), denodo (1), Deny assignment (1), Devops (1), DLT (9), DLT Pipeline (6), DLT Pipelines (5), DLTCluster (1), Documentation (2), Dolly (1), Download files (1), dropduplicatewithwatermark (1), Dynamic Variables (1), Engineering With Databricks (1), env (1), External Sources (1), External Storage (2), FAQ for Databricks Learning Festival (1), Feature Store (2), Filenotfoundexception (1), Free trial (1), GCP Databricks (1), Getting started (1), glob (1), Good Documentation (1), Google Bigquery (1), hdfs (1), Help (1), How to study Databricks (1), informatica (1), Jar (1), Java (1), JDBC Connector (1), Job Cluster (1), Job Task (1), Kubernetes (1), LightGMB (1), Lineage (1), LLMs (1), Login (1), Login Account (1), Machine Learning (1), MachineLearning (1), masking (1), Materialized Tables (2), Medallion Architecture (1), Metastore (1), MlFlow (2), Mlops (1), Model Serving (1), Model Training (1), Mount (1), Networking (1), nic (1), Okta (1), ooze (1), os (1), Password (1), Permission (1), Permissions (1), personalcompute (1), Pipeline (2), policies (1), PostgresSQL (1), Pricing (1), pubsub (1), Pyspark (1), Python (2), Python Code (1), Python Wheel (1), Quickstart (1), RBAC (1), Repos Support (1), Reserved VM's (1), Reset (1), run a job (1), runif (1), S3 (1), SAP SUCCESS FACTOR (1), Schedule (1), SCIM (1), Serverless (1), Service principal (1), Session (1), Sign Up Issues (2), Significant Performance Difference (1), Spark (2), sparkui (2), Splunk (1), sqoop (1), Start (1), Stateful Stream Processing (1), Storage Optimization (1), Structured Streaming ForeachBatch (1), suggestion (1), Summit23 (2), Support Tickets (1), Sydney (2), Table Download (1), tabrikck (1), Tags (1), Troubleshooting (1), ucx (2), Unity Catalog (1), Unity Catalog Error (2), Unity Catalog Metastore (1), UntiyCatalog (1), Update (1), user groups (1), Venicold (3), volumes (2), Voucher Not Recieved (1), Watermark (1), Weekly Documentation Update (1), with open (1), Women (1), Workflow (2), Workspace (2)