Community Platform Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

pshuk
by New Contributor III
  • 2052 Views
  • 1 replies
  • 0 kudos

How to ingest files from volume using autoloader

I am doing a test run. I am uploading files to a volume and then using Auto Loader to ingest the files and create a table. I am getting this error message: com.databricks.sql.cloudfiles.errors....

Latest Reply
Wojciech_BUK
Valued Contributor III
  • 0 kudos

Hey, I think you are mixing DLT syntax with PySpark syntax. In DLT you should use: CREATE OR REFRESH STREAMING TABLE <table-name> AS SELECT * FROM STREAM read_files('<path-to-source-data>', format => '<file-format>'), or in Python, @dlt....

SamGreene
by Contributor
  • 1909 Views
  • 2 replies
  • 1 kudos

Resolved! Power BI keeps SQL Warehouse Running

Hi, I have a SQL Warehouse in serverless mode, set to shut down after 5 minutes. Using the Databricks web IDE, this works as expected. However, if I connect Power BI, import data into PBI, and then leave the application open, the SQL Warehouse does not s...

Latest Reply
SamGreene
Contributor
  • 1 kudos

Repeating this test today, the SQL Warehouse shut down properly.  Thanks for your helpful reply. 

1 More Replies
Ramakrishnan83
by New Contributor III
  • 345 Views
  • 0 replies
  • 0 kudos

How to Migrate specific notebooks from one Azure Repo to another Azure Repo

Team, I need to migrate only specific notebooks with committed changes, pulled from one repo to another. Environment/Repo setup: Master -> Dev -> Feature Branch -> Developer commits the code in the Feature Branch -> Dev has the changes from D...

OU_Professor
by New Contributor II
  • 950 Views
  • 1 replies
  • 1 kudos

Resolved! Is the Databricks Community Edition still available?

Hello, I am a professor of IT in the Price College of Business at The University of Oklahoma. In the past, I have used the Databricks Community Edition to demonstrate the principles of building and maintaining a data warehouse. This spring semester ...

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @OU_Professor, the Databricks Community Edition is the free version of the cloud-based Databricks Platform. It allows users to access a micro-cluster, a cluster manager, and a notebook environment, and it comes with a rich portfolio of award-win...

xfun
by New Contributor II
  • 1010 Views
  • 2 replies
  • 0 kudos

SQL Warehouse cluster is always running when configuring a Metabase connection

I encountered an issue while using the Metabase JDBC driver to connect to Databricks SQL Warehouse: I noticed that the SQL Warehouse cluster is always running and never stops automatically. Every few seconds, a SELECT 1 query log appears, which I sus...

Latest Reply
xfun
New Contributor II
  • 0 kudos

I will try to remove "preferredTestQuery" or add "idleConnectionTestPeriod" to avoid repeatedly sending the SELECT 1 query.

1 More Replies
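The two settings named in this thread are standard c3p0 connection-pool options. A hedged sketch of a c3p0.properties fragment (whether your Metabase/JDBC deployment reads these properties from this file is an assumption to verify against your setup):

```properties
# c3p0.properties — connection-pool keep-alive settings

# Leave preferredTestQuery unset so no "SELECT 1" keep-alive query
# is sent to the warehouse on a timer:
# c3p0.preferredTestQuery=SELECT 1

# Instead, test idle connections only every 10 minutes (value in seconds),
# giving the serverless SQL Warehouse a chance to auto-stop:
c3p0.idleConnectionTestPeriod=600
```

If the warehouse still never idles, the SQL Warehouse query history will show which client is sending the keep-alive queries.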
pablobd
by Contributor II
  • 810 Views
  • 3 replies
  • 1 kudos

utils.add_libraries_to_model creates a duplicated model

Hello, when I call mlflow.models.utils.add_libraries_to_model(MODEL_URI), it registers a new model in the Model Registry. Is it possible to do the same without registering a new model? Thanks.

Latest Reply
pablobd
Contributor II
  • 1 kudos

I ended up publishing the library to an AWS CodeArtifact repository. Now, how can I tell MLflow to use the private AWS CodeArtifact repository instead of PyPI?

2 More Replies
JordiDekker
by New Contributor III
  • 2189 Views
  • 2 replies
  • 0 kudos

ClassCastException when attempting to timetravel (databricks-connect)

Hi all, using databricks-connect 11.3.19, I get a java.lang.ClassCastException when attempting to time travel. The exact same statement works fine when executed in the Databricks GUI directly. Any ideas on what's going on? Is this a known limitation...

Latest Reply
SusanaD
New Contributor II
  • 0 kudos

Did you find a solution?

1 More Replies
kartik-chandra
by New Contributor III
  • 1589 Views
  • 2 replies
  • 0 kudos

Resolved! Spark read with format as "delta" isn't working with Java multithreading

I have a Spark application (using the Java library) which needs to replicate data from one blob storage to another. I have created a readStream() within it which listens continuously to a Kafka topic for incoming events. The corresponding writeStre...

Latest Reply
kartik-chandra
New Contributor III
  • 0 kudos

The problem was indeed with the way the ClassLoader was being set in the ForkJoinPool (common pool) thread. Spark, in SparkClassUtils, uses Thread.currentThread().getContextClassLoader, which might behave differently in another thread. To solve it I cre...

1 More Replies
A1459
by New Contributor
  • 669 Views
  • 0 replies
  • 0 kudos

Execute delete query from notebook on azure synapse

Hello everyone, is there a way to execute a DELETE query from an Azure Databricks notebook on an Azure Synapse database? I tried using the read API method with the "query" option, but I am getting an error that the JDBC connector is not able to handle the code. Can anyone suggest how we can de...

Sujitha
by Community Manager
  • 9876 Views
  • 1 replies
  • 5 kudos

New how-to guide to data warehousing with the Data Intelligence Platform

Just launched: The Big Book of Data Warehousing and BI, a new hands-on guide focused on real-world use cases from governance, transformation, analytics, and AI. As the demand for data becomes insatiable in every company, the data infrastructure has bec...

invalidargument
by New Contributor III
  • 792 Views
  • 2 replies
  • 2 kudos

Create new notebooks with code

Is it possible to create new notebooks from a notebook in Databricks? I have tried this code, but all of the results are generic files, not notebooks. notebook_str = """# Databricks notebook source import pyspark.sql.functions as F import numpy as np # CO...

Latest Reply
invalidargument
New Contributor III
  • 2 kudos

Unfortunately, %run does not help me since I can't %run a .py file. I still need my code in notebooks. I am transpiling proprietary code to Python using Jinja templates. I would like to have the output as notebooks, since those are the most convenient to edit...

1 More Replies
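One way to get real notebooks rather than plain files is the Workspace Import REST API: content imported with format="SOURCE" that starts with the "# Databricks notebook source" header becomes a notebook, with "# COMMAND ----------" separating cells. A minimal sketch of building the request body (the workspace path and host are placeholders; actually sending it needs a personal access token and an HTTP client):

```python
import base64

def build_notebook_import_payload(workspace_path: str, source: str) -> dict:
    """Build the JSON body for POST /api/2.0/workspace/import.

    With format="SOURCE" and language="PYTHON", the workspace stores the
    uploaded file as a notebook rather than a generic file.
    """
    return {
        "path": workspace_path,
        "format": "SOURCE",
        "language": "PYTHON",
        "overwrite": True,
        # The API expects the notebook source base64-encoded.
        "content": base64.b64encode(source.encode("utf-8")).decode("ascii"),
    }

notebook_src = (
    "# Databricks notebook source\n"
    "import pyspark.sql.functions as F\n"
    "\n"
    "# COMMAND ----------\n"
    "print('hello from a generated cell')\n"
)

payload = build_notebook_import_payload("/Users/me@example.com/generated_nb", notebook_src)
# Sending it (hypothetical host/token; 'requests' assumed available):
# requests.post(f"{host}/api/2.0/workspace/import",
#               headers={"Authorization": f"Bearer {token}"},
#               json=payload)
```

This keeps the Jinja-generated source as the input and lets the workspace do the file-to-notebook conversion, instead of writing files directly into the repo.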
SamGreene
by Contributor
  • 1448 Views
  • 1 replies
  • 0 kudos

Resolved! DLT Pipeline Graph is not detecting dependencies

Hi, this is my first Databricks project. I am loading data from a UC external volume in ADLS into tables and then splitting one of the tables into two tables based on a column. When I create a pipeline, the tables don't have any dependencies, and this is...

Latest Reply
SamGreene
Contributor
  • 0 kudos

While re-implementing my pipeline to publish to dev/test/prod instead of bronze/silver/gold, I think I found the answer.  The downstream tables need to use the LIVE schema. 

SamGreene
by Contributor
  • 1521 Views
  • 2 replies
  • 1 kudos

Resolved! Unpivoting data in live tables

I am loading data from CSV into live tables. I have a live delta table with data like this: WaterMeterID, ReadingDateTime1, ReadingValue1, ReadingDateTime2, ReadingValue2. It needs to be unpivoted into this: WaterMeterID, ReadingDateTime1, ReadingValue1...

Latest Reply
Kaniz_Fatma
Community Manager
  • 1 kudos

Hi @SamGreene, The stack function allows you to unpivot columns by rotating their values into rows. It’s available both in Scala and PySpark.

1 More Replies
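For reference, Spark SQL's stack(n, expr1, ..., exprk) generator emits n rows per input row. A pure-Python sketch of the row expansion it performs on the meter readings (column names taken from the post; the equivalent SQL expression is shown in the comment and is an assumption about the exact schema):

```python
# Pure-Python illustration of what SQL's stack() does to one wide row.
# The Spark SQL equivalent would be roughly:
#   SELECT WaterMeterID,
#          stack(2, ReadingDateTime1, ReadingValue1,
#                   ReadingDateTime2, ReadingValue2)
#            AS (ReadingDateTime, ReadingValue)
#   FROM readings

def unpivot_row(row: dict, n: int) -> list:
    """Expand one wide reading row into n (datetime, value) rows."""
    return [
        {
            "WaterMeterID": row["WaterMeterID"],
            "ReadingDateTime": row[f"ReadingDateTime{i}"],
            "ReadingValue": row[f"ReadingValue{i}"],
        }
        for i in range(1, n + 1)
    ]

wide = {
    "WaterMeterID": 42,
    "ReadingDateTime1": "2024-01-01T00:00", "ReadingValue1": 10.5,
    "ReadingDateTime2": "2024-01-01T01:00", "ReadingValue2": 11.0,
}
long_rows = unpivot_row(wide, 2)  # two narrow rows for meter 42
```

In a live table definition, the stack() expression goes straight into the SELECT of the streaming table, so no Python loop is needed at scale; the sketch only shows the shape of the transformation.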
stackoftuts
by New Contributor
  • 436 Views
  • 0 replies
  • 0 kudos

AI uses

Delve into the transformative realm of AI applications, where innovation merges seamlessly with technology's limitless possibilities. Explore the multifaceted landscape of AI uses and its dynamic impact on diverse industries at StackOfTuts.

