- 2833 Views
- 3 replies
- 0 kudos
Cannot %run notebook using relative path
In the root of my workspace, I have the following folder structure set up: /foo/ETL, /foo/setup, /foo/utilities. From a notebook in /foo/ETL or /foo/setup, if I try to %run a notebook in /foo/utilities by giving the relative path ../utilities/<x>, I get...
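For reference, a workspace-relative %run path should resolve the way ordinary POSIX paths do; a quick plain-Python sanity check of the expected resolution (posixpath here is only an illustration, not a Databricks API, and x stands in for the elided notebook name):

```python
import posixpath

# A notebook in /foo/ETL referencing ../utilities/x should resolve to
# /foo/utilities/x; likewise for a notebook in /foo/setup.
def resolve(notebook_dir, relative_path):
    return posixpath.normpath(posixpath.join(notebook_dir, relative_path))

print(resolve("/foo/ETL", "../utilities/x"))    # /foo/utilities/x
print(resolve("/foo/setup", "../utilities/x"))  # /foo/utilities/x
```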
- 0 kudos
Hello @DManowitz-BAH, hope you are doing great. I tried to replicate this behavior on the 12.2 LTS runtime and didn't get any errors for the same folder/workspace structure. Can you share the cluster configuration details so I can mimic exactly the same ...
- 1408 Views
- 1 replies
- 1 kudos
Delta Live Tables: BAD_REQUEST: Pipeline cluster is not reachable.
Hello community: I don't know why my Delta Live Tables workflow fails at this step. This is the configuration I have for the pipeline: {"id": "**","pipeline_type": "WORKSPACE","clusters": [{"label": "default","spark_conf": {***},"num_workers": 3},{"l...
- 2385 Views
- 4 replies
- 3 kudos
Databricks repos adds white spaces to notebook.
Hi, Databricks Repos is adding white space to our notebooks without us even opening them, so it seems to happen automatically. Does someone have a solution for how we can turn this off? An example: Can someone help us with this?
- 3 kudos
This bug is causing problems in our CI/CD pipeline too. I've had to manually clear things to keep it running as we get merge conflicts when no changes are made.
- 4453 Views
- 1 replies
- 0 kudos
Pyspark cast error
Hi All, hive> create table UK (a decimal(10,2)); hive> create table IN (a decimal(10,5)); hive> create view T as select a from UK union all select a from IN; all of the above statements execute successfully in Hive and return results when a select statement...
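For context on the union above: Spark widens two DECIMAL types to a common type (scale = max of the two scales, integral digits = max of the two integral-digit counts), so DECIMAL(10,2) unioned with DECIMAL(10,5) becomes DECIMAL(13,5). A small sketch of that widening rule, assuming Spark's documented decimal-precision behavior; explicitly casting both sides to the widened type is a common workaround when the union errors:

```python
# Sketch of Spark's decimal-widening rule for UNION:
# scale = max(s1, s2); integral digits = max(p1 - s1, p2 - s2).
def widened_decimal(p1, s1, p2, s2):
    scale = max(s1, s2)
    integral = max(p1 - s1, p2 - s2)
    return (integral + scale, scale)

# The two column types from the question: decimal(10,2) and decimal(10,5).
print(widened_decimal(10, 2, 10, 5))  # (13, 5) -> DECIMAL(13,5)
```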
- 5645 Views
- 0 replies
- 0 kudos
databricks assistant rocks
Databricks Assistant Rocks!
- 4718 Views
- 0 replies
- 0 kudos
Connecting to hive metastore as well as glue catalog at the sametime
Hi, is there any way we can connect to the Glue catalog as well as to the Hive metastore in the same warehouse? I can create a single instance profile and provide all the required access for buckets or for the Glue catalog. I tried with the below configuration: spark.s...
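For reference, on Databricks on AWS the Glue catalog is switched on per cluster with a Spark conf, and a cluster then uses Glue in place of the Hive metastore rather than alongside it. A sketch of the Glue side of the configuration (the account ID is a placeholder, only needed when the catalog lives in a different AWS account):

```
# Cluster Spark config (sketch): use AWS Glue as the metastore catalog.
spark.databricks.hive.metastore.glueCatalog.enabled true
# Only for a cross-account Glue catalog (placeholder account ID):
spark.hadoop.hive.metastore.glue.catalogid 123456789012
```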
- 1137 Views
- 0 replies
- 0 kudos
Optuna results change when rerun on Databricks
The best trial results seem to change every time the same study is rerun. On Microsoft Azure this can be fixed by setting the sampler seed. However this solution doesn't seem to work on Databricks. Does anyone know why that is the case and how to mak...
- 561 Views
- 0 replies
- 0 kudos
Structured Streaming from TimescaleDB
I realize that the best practice would be to integrate our service with Kafka as a streaming source for Databricks, but given that the service already stores data into TimescaleDB, how can I stream data from TimescaleDB into DBX? Debezium doesn't wor...
- 1772 Views
- 1 replies
- 0 kudos
No results were found: the query results may no longer be available or you may not have permissions
When anyone (admins included) clicks on an alert task in a job run, we see the error `No results were found: the query results may no longer be available or you may not have permissions`. Should we be seeing something else, or is this a matter of a poor ...
- 0 kudos
The error message "No results were found: the query results may no longer be available or you may not have permissions" is designed to address a range of potential situations. This includes instances where data might not be accessible due to reasons ...
- 7864 Views
- 1 replies
- 1 kudos
How to properly import spark functions?
I have the following command that runs in my Databricks notebook: spark.conf.get("spark.databricks.clusterUsageTags.managedResourceGroup"). I have wrapped this command into a function (simplified): def get_info(): return spark.conf.get("spark.databri...
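A likely cause: the `spark` object is injected only into the notebook's globals, so a function defined in an importable module can't see it; with PySpark the usual fix is to resolve the session inside the function via `SparkSession.getActiveSession()`. A pure-Python stand-in for that call-time-lookup pattern (the `_ACTIVE_SESSION` dict, `get_active_session`, and the `rg-demo` value are hypothetical, used only so the sketch runs without a cluster):

```python
# Hypothetical stand-in for SparkSession.getActiveSession(): resolve the
# session when the helper is called, instead of assuming a `spark` global.
_ACTIVE_SESSION = {"spark.databricks.clusterUsageTags.managedResourceGroup": "rg-demo"}

def get_active_session():
    # With real PySpark: return SparkSession.getActiveSession()
    return _ACTIVE_SESSION

def get_info():
    session = get_active_session()  # looked up at call time, not at import time
    return session["spark.databricks.clusterUsageTags.managedResourceGroup"]

print(get_info())  # rg-demo
```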
- 812 Views
- 1 replies
- 0 kudos
Cannot create an account to try Community Edition
Hi, whenever I try to sign up for an account, I keep getting the following message in the first step: "an error has occurred. please try again later". Could you please let me know why this could be? I tried multiple emails and seem to be having the same i...
- 964 Views
- 0 replies
- 0 kudos
Struct type limitation: possible hidden limit for parquet tables
Recently I discovered an issue when creating a PARQUET table that contains a column of type STRUCT with more than 350 string subfields. Such a table can be successfully created via a standard DDL script; nevertheless, each subsequent attempt to work wi...
- 5178 Views
- 1 replies
- 0 kudos
Resolved! How do you properly read database-files (.db) with Spark in Python after the JDBC update?
I have a set of database files (.db) which I need to read into my Python notebook in Databricks. I managed to do this fairly simply up until July, when an update to the SQLite JDBC library was introduced. Up until now I have read the files in question with...
- 0 kudos
When the numbers in the table are really big (millions and billions) or really low (e.g. 1e-15), SQLite JDBC may struggle to import the correct values. To combat this, a good idea could be to use customSchema in options to define the schema using Dec...
- 4018 Views
- 0 replies
- 0 kudos
Cron Schedule like 0 58/30 6,7,8,9,10,11,12,13,14,15,16,17 ? * MON,TUE,WED,THU,FRI * does not work
When we use this cron schedule: 0 58/30 6,7,8,9,10,11,12,13,14,15,16,17 ? * MON,TUE,WED,THU,FRI *, so far only the 58th minute will run, but not the 28th minute (30 minutes after the 58th minute). Is there some kind of bug in the cron scheduler? Reference: h...
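This matches documented Quartz semantics rather than a bug: in a minute field, `58/30` means "start at minute 58, then every 30 minutes within the same hour", and 58 + 30 = 88 is past minute 59, so only :58 ever fires; steps do not wrap into the next hour. `28/30` (or the list `28,58`) fires at both :28 and :58 every hour. A quick sketch of the step expansion (a hypothetical helper handling only the `start/step` form):

```python
def quartz_step_minutes(field):
    # Expand a Quartz minute field of the form "start/step": begin at
    # `start`, then every `step` minutes, but only up to minute 59 --
    # the sequence does not wrap into the next hour.
    start, step = (int(part) for part in field.split("/"))
    return list(range(start, 60, step))

print(quartz_step_minutes("58/30"))  # [58] -- 58 + 30 = 88 is out of range
print(quartz_step_minutes("28/30"))  # [28, 58] -- fires at :28 and :58
```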
- 2751 Views
- 5 replies
- 1 kudos
Resolved! Databricks Add-on for Splunk v1.2 - Error in 'databricksquery' command
Is anyone else using the new v1.2 of the Databricks Add-on for Splunk? We upgraded to 1.2 and now get this error for all queries. Running process: /opt/splunk/bin/nsjail-wrapper /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Databricks/bin/datab...
- 1 kudos
There is a new mandatory parameter for databricksquery called account_name. This breaking change is not documented in Splunkbase release notes but it does appear in the docs within the Splunk app. databricksquery cluster="<cluster_name>" query="<S...