Community Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

satniks
by New Contributor
  • 58 Views
  • 1 replies
  • 0 kudos

PERSIST TABLE is not supported on serverless compute. Any workaround?

I tried the serverless compute cluster and executed my workload, but it gave the following error: [NOT_SUPPORTED_WITH_SERVERLESS] PERSIST TABLE is not supported on serverless compute. SQLSTATE: 0A000. Can the Databricks team provide more information on this li...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @satniks, The error message you encountered—“[NOT_SUPPORTED_WITH_SERVERLESS] PERSIST TABLE is not supported on serverless compute. SQLSTATE: 0A000”—indicates that the PERSIST TABLE operation is not allowed in a serverless compute environment. U...
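A hedged sketch of the workaround the reply points toward: on serverless compute, instead of caching/persisting a table, materialize the intermediate result as a Delta table and query that. The table and query names below are illustrative assumptions, not from the thread.

```python
# Sketch: replace an unsupported PERSIST/CACHE TABLE with an explicit
# materialization into a (temporary) Delta table.
def materialize_sql(source_query: str, table_name: str) -> str:
    """Build a CREATE OR REPLACE TABLE statement that materializes a query."""
    return f"CREATE OR REPLACE TABLE {table_name} AS {source_query}"

# Illustrative names -- substitute your own query and scratch-table name.
stmt = materialize_sql("SELECT * FROM raw_events WHERE dt = '2024-01-01'", "tmp_events")
# On a cluster you would then run: spark.sql(stmt)
```

Downstream reads then reference `tmp_events` instead of the cached relation; drop it when the workload finishes.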

Jaron
by New Contributor II
  • 118 Views
  • 2 replies
  • 0 kudos

Where are the "Driver logs" stored by default, and how much space is allocated for them?

Recently, when I was using Databricks for deep learning, I ran into an issue: after a certain amount of execution time the cluster would break and restart. The logs are as below: The Event logs displayed "Metastore is down; Driver is up but is...

Latest Reply
Walter_C
Valued Contributor III
  • 0 kudos

The error message "echo: write error: no space left on device" indicates that the storage space for the driver logs might be full. The default storage location for driver logs in Databricks is on the local disk of the driver node. However, the exact ...
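To confirm the diagnosis, a quick disk-space check from a notebook cell (pure Python; the claim that driver stdout/stderr/log4j files live on the driver's local disk, commonly under /databricks/driver, is conventional and worth verifying on your runtime):

```python
import shutil

# Inspect free space on the driver's root filesystem; "echo: write error:
# no space left on device" means this (or the log partition) has filled up.
total, used, free = shutil.disk_usage("/")
print(f"root disk: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
```

If the disk is full, configuring cluster log delivery to a storage location (so logs rotate off the local disk) is the usual remedy.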

1 More Replies
Ramakrishnan83
by New Contributor III
  • 5081 Views
  • 7 replies
  • 0 kudos

Renaming the database Name in Databricks

Team, Initially our team created the databases with the environment name appended, e.g. cust_dev, cust_qa, cust_prod. I am looking to standardize on a consistent database name across environments. I want to rename to "cust". All of my tables are ...

Latest Reply
Avvar2022
Contributor
  • 0 kudos

You can also use "CASCADE" to drop the schema and its tables as well; it is recursive.
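Since Spark SQL has no ALTER DATABASE ... RENAME, a "rename" is typically create-new-schema, clone each table, then drop the old schema with CASCADE. A minimal sketch that builds that statement sequence (schema/table names are illustrative; DEEP CLONE assumes Delta tables, so verify it applies to your tables before running):

```python
def rename_schema_sql(old: str, new: str, tables: list) -> list:
    """Build the SQL plan for moving all tables from schema `old` to `new`."""
    stmts = [f"CREATE SCHEMA IF NOT EXISTS {new}"]
    for t in tables:
        # DEEP CLONE copies data + metadata for Delta tables.
        stmts.append(f"CREATE TABLE {new}.{t} DEEP CLONE {old}.{t}")
    # CASCADE drops the old schema and everything in it, recursively.
    stmts.append(f"DROP SCHEMA {old} CASCADE")
    return stmts

plan = rename_schema_sql("cust_dev", "cust", ["orders"])
# On a cluster: for s in plan: spark.sql(s)
```

Run the DROP only after validating the clones, since CASCADE is irreversible.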

6 More Replies
df_Jreco
by New Contributor
  • 81 Views
  • 1 replies
  • 0 kudos

Custom Python package in Notebook task using bundle

Hi mates! In my company, we are moving our pipelines to Databricks bundles; our pipelines use a notebook that receives some parameters. This notebook uses a custom Python package to apply the business logic based on the parameters it receives. The thi...

databricks-bundles
Latest Reply
df_Jreco
New Contributor
  • 0 kudos

Solved it by understanding the databricks.yml configuration!

trimethylpurine
by New Contributor
  • 149 Views
  • 2 replies
  • 0 kudos

Resolved! Gathering Data Off Of A PDF File

Hello everyone, I am developing an application that accepts PDF files and inserts the data into my database. The company in question distributes this data to us only as PDF files, which you can see attached below (I hid personal info for priv...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @trimethylpurine, Extracting data from PDF files and inserting it into a database is a common task. Here are a few options you can explore: Docparser: This tool allows you to convert PDF data into usable formats for databases like MySQL, Post...
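Whichever extractor is chosen (pdfplumber's page.extract_text() is one common option, named here as an assumption), the extracted text still has to be parsed into rows before the database insert. A minimal sketch with a hypothetical line layout; the pattern would be adjusted once the real PDF text is inspected:

```python
import re

# Hypothetical layout: "<vendor> <YYYY-MM-DD> <amount>" per line.
LINE = re.compile(r"^(\S+)\s+(\d{4}-\d{2}-\d{2})\s+([\d.]+)$")

def parse_line(line: str):
    """Return (vendor, date, amount) or None for lines that don't match."""
    m = LINE.match(line.strip())
    return m.groups() if m else None

row = parse_line("ACME  2024-05-01  19.99")
# row -> tuple ready to bind into an INSERT statement's parameters
```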

1 More Replies
pranay
by New Contributor
  • 77 Views
  • 1 replies
  • 0 kudos

Data Ingestion into DLT from Azure Event hub batch processing

I am building my first DLT pipeline and I want to ingest data from Azure Event Hubs for batch processing. But I can only find documentation for streaming with Kafka. Can we do batch processing with DLT and Azure Event Hubs?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @pranay,  Since Databricks Notebooks allow you to run Python code, you can leverage Python libraries to manipulate Excel files.Instead of using pywin32, consider using libraries like pandas or openpyxl to read, modify, and save Excel files.You can...
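On the original Event Hubs question: Event Hubs exposes a Kafka-compatible endpoint on port 9093, so the documented Kafka source can read it, and a one-time/availableNow trigger (or a plain batch read) gives batch-like behavior. A hedged sketch of the reader options; the option names follow the Spark Kafka source, but the shaded JAAS class path and names below are assumptions to verify against your runtime:

```python
def eventhub_kafka_options(namespace: str, eventhub: str, connection_string: str) -> dict:
    """Build Spark Kafka-source options for an Event Hubs Kafka endpoint."""
    bootstrap = f"{namespace}.servicebus.windows.net:9093"
    jaas = (
        "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule "
        f'required username="$ConnectionString" password="{connection_string}";'
    )
    return {
        "kafka.bootstrap.servers": bootstrap,
        "subscribe": eventhub,
        "kafka.security.protocol": "SASL_SSL",
        "kafka.sasl.mechanism": "PLAIN",
        "kafka.sasl.jaas.config": jaas,
    }

opts = eventhub_kafka_options("mynamespace", "myhub", "Endpoint=sb://...")
# On a cluster: spark.read.format("kafka").options(**opts).load() for batch,
# or readStream with trigger(availableNow=True) inside a DLT table function.
```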

Awoke101
by New Contributor II
  • 77 Views
  • 1 replies
  • 0 kudos

UC_COMMAND_NOT_SUPPORTED.WITHOUT_RECOMMENDATION in shared access mode?

I'm using a shared access cluster and am getting this error while trying to upload to Qdrant.
# embeddings_df = embeddings_df.limit(5)
options = {
    "qdrant_url": QDRANT_GRPC_URL,
    "api_key": QDRANT_API_KEY,
    "collection_name": QDRANT_COLLEC...

qdrant
shared_acess
UC_COMMAND_NOT_SUPPORTED.WITHOUT_RECOMMENDATION
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @Awoke101,  Shared access clusters in Databricks have certain restrictions due to Unity Catalog limitations.I recommend trying the single-user Unity Catalog cluster or configuring data persistence outside the container. Let me know if you need fur...

kellenpink
by New Contributor
  • 56 Views
  • 1 replies
  • 0 kudos

How does Databricks differ from Snowflake with respect to AI tooling?

How does Databricks differ from Snowflake with respect to AI tooling?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @kellenpink, Databricks and Snowflake are both powerful platforms, but they excel in different areas:   Databricks shines in big data processing, machine learning, and AI workloads. It provides an integrated environment for data enginee...

kellenpink
by New Contributor
  • 51 Views
  • 1 replies
  • 0 kudos

Hi, how can one go about assessing the value created by implementing DBRX?

Hi, how can one go about assessing the value created by implementing DBRX?

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @kellenpink, Assessing the value created by implementing Databricks (DBRX) involves several considerations. Here are some steps you can take: Business Objectives and KPIs: Start by defining clear business objectives for using Databricks. What ...

esi
by New Contributor
  • 1951 Views
  • 2 replies
  • 0 kudos

numpy.ndarray size changed, may indicate binary incompatibility

Hi All, I have installed the following libraries on my cluster (11.3 LTS, which includes Apache Spark 3.3.0, Scala 2.12): numpy==1.21.4, flair==0.12. On executing `from flair.models import TextClassifier`, I get the following error: "numpy.ndarray size chan...

Latest Reply
sean_owen
Honored Contributor II
  • 0 kudos

You have changed the numpy version, and presumably that is not compatible with other libraries in the runtime. If flair requires a later numpy, use a later DBR runtime for best results, which already has later numpy versions.
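A small helper illustrating the version logic behind that advice (purely illustrative; the "built against" version is an assumption you would read out of the actual error, not a fact from this thread):

```python
def needs_rebuild(installed: str, built_against: str) -> bool:
    """True when the installed numpy predates the version a compiled wheel
    (e.g. one of flair's dependencies) was built against -- the situation
    that produces the "ndarray size changed" binary-incompatibility error."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(installed) < as_tuple(built_against)

# Pinning numpy==1.21.4 under a wheel built against a newer numpy triggers it:
flagged = needs_rebuild("1.21.4", "1.22.0")
```

The fix is to align the two: either move to a DBR whose bundled numpy is new enough, or reinstall the dependent package against the pinned numpy.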

1 More Replies
niruban
by New Contributor II
  • 109 Views
  • 1 replies
  • 0 kudos

Databricks Asset Bundle Behaviour for new workflows and existing workflows

Dear Community Members - I am trying to deploy a workflow using DAB. After deploying, if I update the same workflow with a different bundle name, it creates a new workflow instead of updating the existing workflow. Also, when I am trying to use sa...

Latest Reply
niruban
New Contributor II
  • 0 kudos

@nicole_lu_PM: Do you have any suggestions or feedback for the above question? It would be really helpful if we could get some insights.

pdemeulenaer
by New Contributor
  • 103 Views
  • 1 replies
  • 0 kudos

Databricks asset bundles dependencies

Is anyone aware of a way to include a requirements.txt within the job definition of a Databricks Asset Bundle? The documentation mentions how to have dependencies in workspace files or Unity Catalog volumes, but I wanted to ask if it is possible to decl...

databricksassetbundles
Dependency
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @pdemeulenaer, When working with Databricks Asset Bundles, you can specify library dependencies for local development in the requirements*.txt file at the root of the bundle project. However, job task library dependencies are declared in your bund...
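For reference, task-level dependencies are declared under `libraries` in the bundle's job definition; a hedged databricks.yml fragment (the job name, notebook path, and package names are illustrative assumptions, so check them against the bundle schema for your CLI version):

```yaml
# Sketch of task-level library dependencies in a Databricks Asset Bundle.
resources:
  jobs:
    my_job:                     # illustrative job name
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/main_notebook.py   # illustrative path
          libraries:
            - pypi:
                package: requests==2.31.0           # illustrative pin
            - whl: ./dist/my_package-*.whl          # illustrative wheel
```

This declares dependencies per task rather than reading a requirements.txt at deploy time, which matches the documented behavior the reply describes.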

PragyaS
by New Contributor
  • 119 Views
  • 1 replies
  • 0 kudos

Creating Zip file on Azure storage explorer

I need to create a zip file containing the CSV files I have created from the dataframe. But I am unable to create a valid zip file containing all the CSV files. Is it possible to create a zip file from code in Databricks on Azure storag...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @PragyaS, References: Creating Zip file on Azure storage explorer - Databricks Community; How to zip files (on Azure Blob Storage) with shutil in Databricks; Working with Unity Catalog in Azure Databricks; Announcing the General Availability of Unity C...
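A minimal, hedged sketch of one approach: build the zip entirely in memory with the standard library, then write the bytes to storage. The CSV contents and the target volume path are illustrative assumptions:

```python
import io
import zipfile

# Illustrative CSV contents; in practice these would come from the dataframe
# (e.g. df.toPandas().to_csv()) or from files the job already wrote.
csv_files = {
    "orders.csv": "id,amount\n1,10\n",
    "customers.csv": "id,name\n1,Ann\n",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, content in csv_files.items():
        zf.writestr(name, content)  # one archive entry per CSV

zip_bytes = buf.getvalue()
# Hypothetical target path on Databricks:
# with open("/Volumes/main/default/exports/out.zip", "wb") as f:
#     f.write(zip_bytes)
```

Writing the finished bytes in one call avoids the truncated/invalid archives that partial writes to remote storage can produce.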

nileshtiwaari
by New Contributor
  • 65 Views
  • 1 replies
  • 0 kudos

Spark structured streaming

Hi, could someone please help me with this code? The input parameter df is a Spark structured streaming dataframe.
def apply_duplicacy_check(df, duplicate_check_columns):
    if len(duplicate_check_columns) == 0:
        return None, df
    valid_df = df.dr...

Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @nileshtiwaari , The error message you’re encountering indicates that the .exceptAll() operation is not supported on streaming DataFrames. In structured streaming, certain operations have limitations due to the nature of streaming data. To addr...

bamvallar
by New Contributor
  • 71 Views
  • 1 replies
  • 0 kudos
Latest Reply
Kaniz_Fatma
Community Manager
  • 0 kudos

Hi @bamvallar, Welcome to the Databricks Community! We're thrilled to have you here and thank you for attending the DAIS and visiting our community booth. Your participation means a lot to us, and we're excited to see the contributions you'll bring ...
