- 3435 Views
- 1 replies
- 0 kudos
Tracking File Arrivals in Nested Folders Using Databricks File Arrival Trigger
Hi Team, I'm currently exploring a file arrival trigger with Databricks, but my data is organized into nested folders representing various sources. For instance: source1 |-- file1 |-- file.csv |-- file2 |-- file.csv My goal is to dete...
@Retired_mod did an LLM bot write the above response for you? You link to a Stack Overflow post which uses Azure Data Factory, and your text contains concepts which do not apply to Databricks ("Use a lookup activity or a Get Metadata Activity to fetch t...
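For anyone landing here: since a file arrival trigger watches a single storage location, one common pattern is to have the triggered task read the whole parent directory with Auto Loader so nested CSVs are picked up whichever subfolder they land in. A minimal sketch, assuming the nested layout from the question; the abfss URI is a hypothetical placeholder, not the poster's real path:

```python
# A minimal sketch, assuming the source1/file1, source1/file2, ... layout
# from the question; the storage URI is a hypothetical placeholder.
from pyspark.sql.functions import input_file_name

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("header", "true")
    # One wildcard level matches file1, file2, ... under source1
    .load("abfss://landing@<account>.dfs.core.windows.net/source1/*/")
    # Record which nested folder each row arrived from
    .withColumn("source_file", input_file_name())
)
```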
- 1583 Views
- 0 replies
- 0 kudos
Create Databricks model serving endpoint in Azure DevOps yaml
Hello, I need to create and destroy a model endpoint as part of CI/CD. I tried mlflow deployments create-endpoint, giving databricks as --target, however it errors saying that --endpoint is not a known argument, when clearly --endpoint is required....
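This thread has no replies; one hedged workaround, if the mlflow CLI keeps rejecting the flags, is to call the Databricks serving-endpoints REST API directly from the DevOps pipeline. A minimal sketch, with hypothetical endpoint and model names:

```python
# A minimal sketch of creating and destroying a model serving endpoint via
# the Databricks REST API; endpoint/model names are hypothetical placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]  # e.g. https://adb-....azuredatabricks.net
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Create the endpoint at the start of the pipeline
resp = requests.post(
    f"{host}/api/2.0/serving-endpoints",
    headers=headers,
    json={
        "name": "my-endpoint",
        "config": {
            "served_entities": [{
                "entity_name": "my_catalog.my_schema.my_model",
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }]
        },
    },
)
resp.raise_for_status()

# Destroy it again at the end of the pipeline
requests.delete(
    f"{host}/api/2.0/serving-endpoints/my-endpoint", headers=headers
).raise_for_status()
```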
- 5453 Views
- 3 replies
- 0 kudos
Notebook Dashboard to HTML to PDF issues
I have created a dashboard using the notebook dashboard interface, rather than the SQL warehouse dashboards. This means that the tables and visualizations on the dashboard, as well as the dashboard itself, are directly tied to a notebook and the outp...
@dataVaughan - you can use a Lakeview dashboard, which provides a URL that is shareable outside of the Databricks workspace. https://www.databricks.com/blog/announcing-public-preview-lakeview-dashboards In your current scenario, you can clone ...
- 3452 Views
- 1 replies
- 0 kudos
Resolved! Fetching metadata for tables in a database stored in Unity Catalog
Hi everyone, I am trying to fetch the metadata of every column from a table, and of every table from the database under a catalog. For that I am trying to use the samples catalog provided by Databricks and get details for the tpch database that provi...
@sai_sathya - you can use the DESCRIBE EXTENDED command to get the metadata of a given table. Also, you can query information_schema.columns within your UC catalog to check the column details of a given table.
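A minimal sketch of both suggestions in a notebook cell, assuming the samples catalog exposes information_schema the way regular UC catalogs do; the names follow the question's tpch example:

```python
# Per-table metadata via DESCRIBE EXTENDED
display(spark.sql("DESCRIBE EXTENDED samples.tpch.orders"))

# Column details for every table in the schema via information_schema
cols = spark.sql("""
    SELECT table_name, column_name, data_type
    FROM samples.information_schema.columns
    WHERE table_schema = 'tpch'
    ORDER BY table_name, ordinal_position
""")
display(cols)
```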
- 8014 Views
- 5 replies
- 1 kudos
Resolved! Can't run .py file using workflows anymore
Dear all, greetings! I have been trying to run a workflow job which runs successfully when a task is created using a Notebook file from a folder present in the Workspace, but when the same task's type is changed to Python script and a .py file is select...
Hi, I have found the solution. It was due to the following option being enabled under the Feature Enablement tab under Databricks Account Console --> Settings. Thank you for all your help! Regards, Uday
- 2324 Views
- 4 replies
- 0 kudos
Parameterizing DLT Pipelines
Hi everyone, I have a DLT pipeline which I need to execute for different source systems. I need advice on how to parameterize this. I have gone through many articles on the web, but it seems there is no accurate information available. Can anyone please hel...
Thank you @AmanSehgal, I have done that and was able to execute the pipeline successfully. But I need to change the parameter value at run time, so that the same pipeline can be used for multiple sources. Can we pass parameters from Job to DLT Pipeli...
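For reference, a DLT notebook reads key-value pairs from the pipeline's Configuration via spark.conf, which is the usual hook for this kind of parameterization; whether a job task can override those values at run time is the open question above. A minimal sketch with a hypothetical configuration key:

```python
# A minimal sketch: read a value set under the pipeline's Configuration,
# e.g. {"mypipeline.source_system": "sap"} — the key name is hypothetical.
import dlt

source_system = spark.conf.get("mypipeline.source_system", "default_source")

@dlt.table(name=f"bronze_{source_system}")
def bronze():
    # Hypothetical landing path parameterized by source system
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load(f"/Volumes/main/landing/{source_system}/")
    )
```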
- 5024 Views
- 3 replies
- 0 kudos
Can't create cluster: "Aws Authorization Failure:" ... not authorized to perform: sts:AssumeRole
Full error here: Aws Authorization Failure: Failure happened when talking to AWS, AWS API error code: AccessDenied AWS error message: User: arn:aws:iam::414351767826:user/ConsolidatedManagerIAMUser-ConsolidatedManagerUser-VX02FYW0SSCY is not authorized...
- 2448 Views
- 2 replies
- 0 kudos
Resolved! Databricks: failure logs
Hello Team, I am new to Databricks. Generally, where are all the logs stored in Databricks? I see that if any job fails, below the command I can see some error messages. Otherwise, in real time, how do I check the log files/error messages in the Databricks UI? T...
- 2809 Views
- 2 replies
- 1 kudos
Resolved! Cannot create Delta location with mount path
Hi all, I'm trying to create a table but cannot use a predefined mount path like '/mnt/silver/'. If I use the full path of the Azure blob container, it creates successfully, like this: `CREATE TABLE IF NOT EXISTS nhan_databricks.f1_processed.circuits (...
Oh thanks for your answer; actually I'm using Unity Catalog.
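For context: Unity Catalog tables can't point at DBFS mount paths, which is why the full cloud URI works; the usual fix is an external location covering the container. A minimal sketch reusing the table name from the question, with placeholder columns and storage account (the real column list is truncated in the preview):

```python
# A minimal sketch, assuming an external location already covers this
# container; columns and storage account are hypothetical placeholders.
spark.sql("""
    CREATE TABLE IF NOT EXISTS nhan_databricks.f1_processed.circuits (
        circuit_id INT,
        name STRING
    )
    USING DELTA
    LOCATION 'abfss://silver@<storage-account>.dfs.core.windows.net/circuits'
""")
```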
- 1682 Views
- 1 replies
- 0 kudos
Can't Create a Workspace using Google Cloud
Trying to create my first workspace. I hit 'create workspace' and I see 3 buckets being created on my GCP, but nothing shows up in the actual 'workspaces' in my Databricks console; the only thing there is the 'create workspace' button. Also, there is no erro...
- 4711 Views
- 2 replies
- 0 kudos
Getting client.session.cache.size warning in PySpark code using Databricks Connect
Hi Community, I have set up a Jupyter notebook on a server and installed Databricks Connect in its kernel to leverage my Databricks cluster compute in the notebook and write PySpark code. Whenever I run my code it gives me the below warning: ```WARN Spark...
- 1562 Views
- 1 replies
- 0 kudos
Ingesting Non-Incremental Data into Delta
Hello, I have non-incremental data landing in a storage account. This data contains old data from before as well as new data. I would like to avoid doing a complete table deletion and table creation just to upload the data from storage and have an upd...
Well, if you know the conditions that separate new data from old data, then while reading the data into your dataframe, use a filter or where clause to select the new data and ingest it into your Delta table. This is how you can do it in general. But if you ha...
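A minimal sketch of that approach, assuming a load_date column can tell new records from old and an id key for upserts; all table, path, and column names are hypothetical:

```python
# A minimal sketch: filter the full dump down to new rows, then MERGE so
# re-delivered old rows don't create duplicates. All names are hypothetical.
from delta.tables import DeltaTable

landing = (
    spark.read.format("csv")
    .option("header", "true")
    .load("/Volumes/main/landing/full_dump/")
)

# Keep only records newer than what the Delta table already holds
# (assumes the table is non-empty; handle None on the very first load)
last_loaded = spark.sql("SELECT max(load_date) FROM main.silver.my_table").first()[0]
new_rows = landing.where(landing.load_date > last_loaded)

(
    DeltaTable.forName(spark, "main.silver.my_table").alias("t")
    .merge(new_rows.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```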
- 1487 Views
- 1 replies
- 0 kudos
Log delivery is not creating data in S3 bucket
Hi, does anyone have an idea of the typical duration for Databricks to create logs in an S3 bucket using the databricks_mws_log_delivery Terraform resource? I've implemented the code provided in the Databricks official documentation, but I've be...
The issue has been resolved. There was no problem with the code or the API. However, it took over 12 hours for logs to start appearing in my bucket, despite Databricks documentation indicating that logs should appear within 1 hour. Thank you!
- 13981 Views
- 3 replies
- 1 kudos
Is there a (request) size limit for the Databricks REST API SQL statements?
When inserting rows through the SQL API (/api/2.0/sql/statements/), when more than a certain number of records (about 25 records with 8 small columns) are included in the statement, the call fails with the error: "The request could not be processed by...
@TheIceBrick did you find out anything else about this? I am experiencing exactly the same: I can insert up to 35 rows but it breaks at about 50 rows. The payload size is 42 KB, and I am passing parameters for each row. @Debayan This is nowhere near the 16 MiB /...
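Until the exact limit is documented, one workaround is to chunk the inserts across multiple Statement Execution API calls. A minimal sketch with hypothetical host, warehouse, and table names:

```python
# A minimal sketch: split rows into small batches and send one INSERT per
# call to stay under the observed request-size cap. Names are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}
rows = [("a", 1), ("b", 2)] * 500  # however many records need inserting

CHUNK = 25  # stay under the limit observed in the thread
for i in range(0, len(rows), CHUNK):
    # Inlining literals for brevity; typed parameters are safer in practice
    values = ", ".join(f"('{name}', {qty})" for name, qty in rows[i:i + CHUNK])
    resp = requests.post(
        f"{host}/api/2.0/sql/statements/",
        headers=headers,
        json={
            "warehouse_id": os.environ["WAREHOUSE_ID"],
            "statement": f"INSERT INTO my_catalog.my_schema.my_table VALUES {values}",
        },
    )
    resp.raise_for_status()
```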
- 12136 Views
- 2 replies
- 0 kudos
Long running jobs get lost
Hello, I tried to schedule a long-running job and surprisingly it seems to neither terminate (and thus does not let the cluster shut down) nor continue running, even though the state is still "Running". But the truth is that the job has miserably ...
Have you looked at the SQL plan to see what Spark job 72 was doing?
Labels: .CSV (1), Access Data (2), Access Databricks (1), Access Delta Tables (2), Account reset (1), ADF Pipeline (1), ADLS Gen2 With ABFSS (1), Advanced Data Engineering (1), AI (1), Analytics (1), Apache spark (1), Apache Spark 3.0 (1), API Documentation (3), Architecture (1), asset bundle (1), Asset Bundles (2), Auto-loader (1), Autoloader (4), AWS (3), AWS security token (1), AWSDatabricksCluster (1), Azure (5), Azure data disk (1), Azure databricks (14), Azure Databricks SQL (6), Azure databricks workspace (1), Azure Delta Lake (1), Azure Unity Catalog (5), Azure-databricks (1), AzureDatabricks (1), AzureDevopsRepo (1), Big Data Solutions (1), Billing (1), Billing and Cost Management (1), Blackduck (1), Bronze Layer (1), Certification (3), Certification Exam (1), Certification Voucher (3), CICDForDatabricksWorkflows (1), Cloud_files_state (1), CloudFiles (1), Cluster (3), Community Edition (3), Community Event (1), Community Group (1), Community Members (1), Compute (3), Compute Instances (1), conditional tasks (1), Connection (1), Contest (1), Credentials (1), CustomLibrary (1), Data (1), Data + AI Summit (1), Data Engineering (3), Data Explorer (1), Data Ingestion & connectivity (1), Databrick add-on for Splunk (1), databricks (2), Databricks Academy (1), Databricks AI + Data Summit (1), Databricks Alerts (1), Databricks Assistant (1), Databricks Certification (1), Databricks Cluster (2), Databricks Clusters (1), Databricks Community (10), Databricks community edition (3), Databricks Community Edition Account (1), Databricks Community Rewards Store (3), Databricks connect (1), Databricks Dashboard (2), Databricks delta (2), Databricks Delta Table (2), Databricks Demo Center (1), Databricks Documentation (2), Databricks genAI associate (1), Databricks JDBC Driver (1), Databricks Job (1), Databricks Lakehouse Platform (6), Databricks Migration (1), Databricks notebook (2), Databricks Notebooks (3), Databricks Platform (2), Databricks Pyspark (1), Databricks Python Notebook (1), Databricks Repo (1), Databricks Runtime (1), Databricks SQL (5), Databricks SQL Alerts (1), Databricks SQL Warehouse (1), Databricks Terraform (1), Databricks UI (1), Databricks Unity Catalog (4), Databricks Workflow (2), Databricks Workflows (2), Databricks workspace (2), Databricks-connect (1), DatabricksJobCluster (1), DataDays (1), Datagrip (1), DataMasking (2), DataVersioning (1), dbdemos (2), DBFS (1), DBRuntime (1), DBSQL (1), DDL (1), Dear Community (1), deduplication (1), Delt Lake (1), Delta (22), Delta Live Pipeline (3), Delta Live Table (5), Delta Live Table Pipeline (5), Delta Live Table Pipelines (4), Delta Live Tables (7), Delta Sharing (2), deltaSharing (1), Deny assignment (1), Development (1), Devops (1), DLT (10), DLT Pipeline (7), DLT Pipelines (5), Dolly (1), Download files (1), Dynamic Variables (1), Engineering With Databricks (1), env (1), ETL Pipelines (1), External Sources (1), External Storage (2), FAQ for Databricks Learning Festival (2), Feature Store (2), Filenotfoundexception (1), Free trial (1), GCP Databricks (1), GenAI (1), Getting started (2), Google Bigquery (1), HIPAA (1), import (1), Integration (1), JDBC Connections (1), JDBC Connector (1), Job Task (1), Lineage (1), LLM (1), Login (1), Login Account (1), Machine Learning (2), MachineLearning (1), Materialized Tables (2), Medallion Architecture (1), Migration (1), ML Model (1), MlFlow (2), Model Training (1), Module (1), Networking (1), Notebook (1), Onboarding Trainings (1), Pandas udf (1), Permissions (1), personalcompute (1), Pipeline (2), Plotly (1), PostgresSQL (1), Pricing (1), Pyspark (1), Python (5), Python Code (1), Python Wheel (1), Quickstart (1), Read data (1), Repos Support (1), Reset (1), Rewards Store (2), Schedule (1), Serverless (3), Session (1), Sign Up Issues (2), Spark (3), Spark Connect (1), sparkui (2), Splunk (2), SQL (8), Summit23 (7), Support Tickets (1), Sydney (2), Table Download (1), Tags (1), Training (2), Troubleshooting (1), Unity Catalog (4), Unity Catalog Metastore (2), Update (1), user groups (1), Venicold (3), Voucher Not Recieved (1), Watermark (1), Weekly Documentation Update (1), Weekly Release Notes (2), Women (1), Workflow (2), Workspace (3)