- 1791 Views
- 6 replies
- 0 kudos
Resolved! File found with %fs ls but not with spark.read
Code:
```python
wikipediaDF = (spark.read
    .option("HEADER", True)
    .option("inferSchema", True)
    .csv("/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv"))
display(bostonDF)
```
Error: Failed to store the result. Try rerunning ...
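For reference, a minimal sketch of reading this tab-separated file, assuming the stock `databricks-datasets` path above and the notebook-provided `spark` and `display`; note the `sep` option (the file is a TSV) and that `display()` should be given the DataFrame that was actually read:

```python
# Read the tab-separated pageviews file; .csv() handles TSV once sep="\t" is set.
wikipediaDF = (spark.read
    .option("header", True)
    .option("inferSchema", True)
    .option("sep", "\t")  # the file is tab-separated, not comma-separated
    .csv("/databricks-datasets/wikipedia-datasets/data-001/pageviews/raw/pageviews_by_second.tsv"))

display(wikipediaDF)  # display the DataFrame defined above, not an unrelated one
```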
I have the exact same issue. It seems like limiting the data passed to the display() method works as a temporary solution, but I wonder if there's any long-term one. The idea would be to have the possibility of displaying larger datasets within a notebook. How to achi...
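A minimal sketch of that limiting workaround, assuming a DataFrame `df` already defined in the notebook:

```python
# Cap the number of rows handed to display() so the rendered result
# stays small enough for the notebook to store.
display(df.limit(1000))
```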
- 454 Views
- 1 reply
- 1 kudos
Resolved! Spreadsheet-Like UI for Databricks
We are currently entering data into Excel and then uploading it into Databricks. Is there a built-in spreadsheet-like UI within Databricks that can update data directly in Databricks?
Hello, @j_h_robinson! Databricks doesn’t have a built-in spreadsheet-like UI for direct data entry or editing. Are you manually uploading the Excel files or using an ODBC driver setup? If you’re doing it manually, you might find this helpful: Connect...
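If the files are uploaded manually, a minimal sketch of turning an uploaded workbook into a queryable table; the volume path and table name are hypothetical, and reading .xlsx with pandas requires the openpyxl package:

```python
import pandas as pd

# Read the uploaded workbook from a Unity Catalog volume (hypothetical path).
pdf = pd.read_excel("/Volumes/main/default/uploads/budget.xlsx")

# Persist it as a Delta table so it can be queried and updated inside Databricks.
spark.createDataFrame(pdf).write.mode("overwrite").saveAsTable("main.default.budget")
```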
- 2244 Views
- 5 replies
- 2 kudos
Resolved! Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?
I’m currently working with Databricks autoscaling configurations and trying to better understand how Spark decides when to spin up additional worker nodes. My cluster has a minimum of one worker and can scale up to five. I know that tasks are assigne...
Hi @h_h_ak,
Short answer: Autoscaling primarily depends on the number of pending tasks. Workspaces on the Premium plan use optimized autoscaling, while those on the Standard plan use standard autoscaling.
Long answer: Databricks autoscaling responds main...
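A minimal sketch of creating such a cluster with the Databricks Python SDK, using the one-to-five-worker bounds described in the question; the runtime version and node type are placeholder assumptions:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.compute import AutoScale

w = WorkspaceClient()

# Cluster that scales between 1 and 5 workers as pending-task load changes.
cluster = w.clusters.create(
    cluster_name="autoscale-demo",
    spark_version="15.4.x-scala2.12",  # placeholder runtime version
    node_type_id="i3.xlarge",          # placeholder node type
    autoscale=AutoScale(min_workers=1, max_workers=5),
).result()
```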
- 963 Views
- 4 replies
- 4 kudos
Why is the ipynb format recommended?
In this document, https://docs.databricks.com/aws/en/notebooks/notebook-format, the Jupyter (.ipynb) format is recommended:
> Select File from the workspace menu, select Notebook format, and choose the format you want. You can choose either Jupyter (.ipynb...
Hi @Yuki, one other risk that we foresee / encountered recently is how the notebooks will look in your pull requests in external repos (Azure DevOps or GitHub). It will be very hard for a pull request reviewer to understand the code / notebook read...
- 2718 Views
- 1 replies
- 0 kudos
Find value in any column in a table
Hi, I'm not sure if this is a possible scenario, but is there, by any chance, a way to query all the columns of a table to search for a value? Explanation: I want to search for a specific value in all the columns of a Databricks table. I don't know whi...
I also have this same requirement now and can't find a solution for it yet. Any help would be good. Thanks!
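A minimal PySpark sketch of one way to do this, building an OR filter across every column cast to string; the table name is hypothetical:

```python
from functools import reduce
from pyspark.sql import functions as F

def rows_containing(df, value):
    """Return rows where any column, cast to string, equals the search value."""
    condition = reduce(
        lambda a, b: a | b,
        [F.col(c).cast("string") == F.lit(str(value)) for c in df.columns],
    )
    return df.filter(condition)

# Search every column of a table (hypothetical name) for the value "42".
display(rows_containing(spark.table("main.default.my_table"), "42"))
```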
- 1072 Views
- 4 replies
- 0 kudos
Databricks apps - Volumes and Workspace - FileNotFound issues
I have a Databricks App I need to integrate with volumes using local Python os functions. I've set up a simple test:
```python
def __init__(self, config: ObjectStoreConfig):
    self.config = config
    # Ensure our required paths are created
    ...
```
If you use the Databricks Python SDK, you can access volume files using the built-in app credentials. All you need to do is instantiate the workspace client from the SDK, and you can use its methods to operate on volumes.
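A minimal sketch of that approach with the SDK's Files API; the volume path is hypothetical:

```python
import io
from databricks.sdk import WorkspaceClient

# Inside a Databricks App this picks up the app's built-in credentials.
w = WorkspaceClient()

volume = "/Volumes/main/default/my_volume"  # hypothetical volume path

# Upload a file, read it back, and list the directory, all through the SDK.
w.files.upload(f"{volume}/hello.txt", io.BytesIO(b"hello"), overwrite=True)
data = w.files.download(f"{volume}/hello.txt").contents.read()
for entry in w.files.list_directory_contents(volume):
    print(entry.path)
```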
- 899 Views
- 2 replies
- 0 kudos
Solace to Azure Data Lake Storage
Hi Team,
What is the most effective method for performing data ingestion from Solace to Azure Data Lake Storage (ADLS) utilizing an Azure Databricks notebook? Any recommendations would be greatly appreciated.
Regards,
Phani
Here is the sample script to invoke the connector:
```scala
val struct_stream = spark.readStream.format("solace")
  .option("host", "")
  .option("vpn", "")
  .option("username", "")
  .option("password", "")
  .option("queue", "")
  .option("connectRetries", 3)
  .option("reconnec...
```
- 6680 Views
- 20 replies
- 4 kudos
Mounting Data IOException
Hello, I am currently taking a Coursera course on data science using SQL. For one of our assignments we need to mount some data by running a script that has been provided to us by the class. When I run the script, I receive the following error. I...
Hello all, we came up with a solution: download the data directly instead of mounting it. The Community Edition is limited, and we don't have access to S3 unless we create our own AWS account, load the data there, and then mount our account on dat...
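A minimal sketch of the direct-download approach on Community Edition; the URL and paths are hypothetical:

```python
import urllib.request

# Download the course data to the driver's local disk...
url = "https://example.com/course-data.csv"   # hypothetical source URL
local_path = "/tmp/course-data.csv"
urllib.request.urlretrieve(url, local_path)

# ...then copy it into DBFS so Spark can read it.
dbutils.fs.cp(f"file:{local_path}", "dbfs:/FileStore/course-data.csv")
df = spark.read.option("header", True).csv("dbfs:/FileStore/course-data.csv")
```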
- 3115 Views
- 6 replies
- 3 kudos
Unable to install a wheel file which is in my volume to a serverless cluster
I am trying to install a wheel file that is in my volume onto a serverless cluster and am getting the error below. @ken @Retired_mod
Note: you may need to restart the kernel using %restart_python or dbutils.library.restartPython() to use updated packages. WARN...
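For reference, a minimal sketch of installing a wheel from a Unity Catalog volume in a serverless notebook; the volume and wheel paths are hypothetical:

```python
# Cell 1: install the wheel from a volume (hypothetical path).
%pip install /Volumes/main/default/libs/my_package-0.1.0-py3-none-any.whl

# Cell 2: restart the Python process so the updated package is importable.
dbutils.library.restartPython()
```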
- 731 Views
- 4 replies
- 1 kudos
Resolved! Agents and Inference table errors
Hi, I'm trying to deploy a RAG model from GCP Databricks. I've added an external gpt4o endpoint and enabled inference tables in settings. But when I'm trying to deploy agents, I'm still getting the "inference table not enabled" error. (I've registered the...
Model Serving is supported in your region, so it may be a different problem or limitation.
- 632 Views
- 2 replies
- 0 kudos
Resolved! DatabricksWorkflowTaskGroup
Hello, I recently learned about the DatabricksWorkflowTaskGroup operator for Airflow, which allows one to run multiple Notebook tasks on a shared job compute cluster from Airflow. Is a similar feature possible to run multiple non-Notebook tasks from Airf...
- 388 Views
- 2 replies
- 0 kudos
Databricks tasks are not skipping if running tasks using Airflow DatabricksworkflowTaskgroup
Currently we are facing a challenge with the below use case: the Airflow DAG has 4 tasks (Task1, Task2, Task3, and Task4), and the dependency is Task1 >> Task2 >> Task3 >> Task4 (all tasks are spark-jar task types). In the Airflow DAG, for Task2 there is ...
Hi @anil_reddaboina, Databricks allows you to add control flow logic to tasks based on the success, failure, or completion of their dependencies. This can be achieved using the "Run if" dependencies field: https://docs.databricks.com/aws/en/jobs/run-i...
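A minimal sketch of what that looks like in a Jobs API task definition; the task keys and JAR main class are hypothetical:

```python
# Task3 runs once Task2 finishes, regardless of whether Task2 succeeded,
# failed, or was skipped, via the "Run if" setting (run_if).
task3 = {
    "task_key": "Task3",
    "depends_on": [{"task_key": "Task2"}],
    "run_if": "ALL_DONE",  # other values include ALL_SUCCESS and NONE_FAILED
    "spark_jar_task": {"main_class_name": "com.example.Task3"},  # hypothetical
}
```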
- 2046 Views
- 3 replies
- 1 kudos
Databricks JDBC driver multi query in one request.
Can I run multiple queries in one command using the Databricks JDBC driver, and would Databricks execute one query faster than running multiple queries in one script?
Yes, you can run multiple queries in one command using the Databricks JDBC driver. The results will be displayed in separate tables. When you run multiple queries, they are all still individual queries. Running multiple queries in a script will no...
- 432 Views
- 3 replies
- 0 kudos
Are row filters and column masks supported on foreign catalogs in Azure Databricks Unity Catalog?
In my solution I am planning to bring an Azure SQL Database into Azure Databricks Unity Catalog as a foreign catalog. Are table row filters and column masks supported in my scenario?
Hi @Arindam19, Yes. Certain operations, including filtering, can be pushed down from Databricks to SQL Server. This is managed by querying the SQL Server directly via a federated connection, allowing SQL Server to handle the filter criteria and retur...
- 1095 Views
- 7 replies
- 0 kudos
Resolved! Programatic selection of serverless compute for notebooks environment version
Hello, I have a case where I am executing notebooks from an external system using the Databricks API /api/2.2/jobs/runs/submit. This has never been a problem with job compute, but due to the quite recent serverless-for-notebooks support being i...
As an alternative, the environment for serverless could be set in the asset bundle job configuration: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/jobs-tutorial#configure-a-job-that-uses-serverless-compute
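For the runs/submit route from the question, a sketch of what the payload might look like; whether notebook tasks honor an environment_key this way is an assumption to verify against the Jobs API reference, and the paths are hypothetical:

```python
# Hypothetical /api/2.2/jobs/runs/submit payload pinning a serverless
# environment version (verify field support in the Jobs API reference).
payload = {
    "run_name": "serverless-notebook-run",
    "environments": [
        {"environment_key": "default", "spec": {"client": "2"}}  # env version "2"
    ],
    "tasks": [{
        "task_key": "run_notebook",
        "environment_key": "default",  # assumption: notebook tasks accept this
        "notebook_task": {"notebook_path": "/Workspace/Users/me/my_notebook"},
    }],
}
```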