Data Engineering
Forum Posts

dbengineer516
by New Contributor
  • 44 Views
  • 1 reply
  • 0 kudos

/api/2.0/preview/sql/queries API only returning certain queries

Hello, When using /api/2.0/preview/sql/queries to list out all available queries, I noticed that certain queries were being shown while others were not. I did a small test on my home workspace, and it was able to recognize certain queries when I defin...

Latest Reply
brockb
New Contributor III
  • 0 kudos

Hi, How many queries were returned in the API call in question? The List Queries documentation describes this endpoint as supporting pagination with a default page size of 25; is that how many you saw returned? Query parameters: page_size integer <= 10...
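A minimal sketch of walking that pagination with requests, assuming the legacy endpoint's documented page/page_size parameters and a response carrying "results" and "count" fields; the host and token are hypothetical placeholders, so verify the field names against your own workspace before relying on this:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # hypothetical host
TOKEN = "<personal-access-token>"                        # hypothetical token

def list_all_queries(page_size: int = 25) -> list[dict]:
    """Collect every query by paging through the legacy List Queries endpoint."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    queries, page = [], 1
    while True:
        resp = requests.get(
            f"{HOST}/api/2.0/preview/sql/queries",
            headers=headers,
            params={"page": page, "page_size": page_size},
        )
        resp.raise_for_status()
        body = resp.json()
        queries.extend(body.get("results", []))
        # Stop once we have fetched as many queries as the API reports in total.
        if not body.get("results") or len(queries) >= body.get("count", 0):
            break
        page += 1
    return queries
```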

prabhu26
by Visitor
  • 43 Views
  • 1 reply
  • 0 kudos

Unable to enforce schema on data read from JSONL file in Azure Databricks using PySpark

I'm trying to build an ETL pipeline in which I'm reading JSONL files from Azure Blob Storage, then trying to transform and load them into Delta tables in Databricks. I have created the below schema for loading my data: schema = StructType([ S...

Latest Reply
DataEngineer
New Contributor II
  • 0 kudos

Try this: add option("multiline", "true").
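A minimal sketch of a schema-enforced JSONL read following that suggestion, assuming a Databricks notebook where `spark` is the active SparkSession; the path and field names are hypothetical. Note that for true line-delimited JSON, "multiline" is normally "false" — "true" applies when each JSON record spans multiple lines:

```python
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# Hypothetical schema; replace with the fields in your own files.
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
])

df = (
    spark.read
    .schema(schema)                # enforce the schema instead of inferring it
    .option("multiline", "true")   # as suggested in the reply above
    .option("mode", "FAILFAST")    # fail loudly on records that violate the schema
    .json("abfss://container@account.dfs.core.windows.net/path/*.jsonl")
)
```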

MarkD
by New Contributor II
  • 377 Views
  • 8 replies
  • 0 kudos

SET configuration in SQL DLT pipeline does not work

Hi, I'm trying to set a dynamic value to use in a DLT query, and the code from the example documentation does not work: SET startDate='2020-01-01'; CREATE OR REFRESH LIVE TABLE filtered AS SELECT * FROM my_table WHERE created_at > ${startDate}; It is g...

Data Engineering
Delta Live Tables
dlt
sql
Latest Reply
Hkesharwani
New Contributor III
  • 0 kudos

Hi @MarkD, You may use set variable_name.var = '1900-01-01' to set the value of a variable, and to use that value, reference ${variable_name.var}. Example: set automated_date.var = '1800-01-01'; select * from my_table where date = CAST(${autom...
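A minimal sketch of that pattern run through spark.sql(), as you might in a Databricks notebook; the table and column names are hypothetical, and the substitution relies on spark.sql.variable.substitute being enabled (it is by default):

```python
# SET stores the raw text after '=' as the value, so the single quotes
# in '1800-01-01' are carried along with it.
spark.sql("SET automated_date.var = '1800-01-01'")

# ${automated_date.var} expands to '1800-01-01' (quotes included),
# so no extra quoting is needed around the substitution.
result = spark.sql(
    "SELECT * FROM my_table "
    "WHERE date >= CAST(${automated_date.var} AS DATE)"
)
result.show()
```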

7 More Replies
pshuk
by New Contributor III
  • 104 Views
  • 2 replies
  • 1 kudos

upload file/table to delta table using CLI

Hi, I am using the CLI to transfer local files to a Databricks Volume. At the end of my upload, I want to create a meta table (storing file name, location, and some other information) and have it as a table on the Databricks Volume. I am not sure how to create ...

Latest Reply
Ayushi_Suthar
Honored Contributor
  • 1 kudos

Hi @pshuk, Greetings! We understand that you are looking for a CLI command to create a table, but at this moment Databricks doesn't support a CLI command to create tables. You can, however, use the SQL Execution API: https://docs.databricks.com/api/workspace/...
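A minimal sketch of creating such a metadata table through the SQL Statement Execution API mentioned above; the host, token, warehouse id, and table/column names are all hypothetical placeholders:

```python
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # hypothetical host
TOKEN = "<personal-access-token>"                        # hypothetical token

resp = requests.post(
    f"{HOST}/api/2.0/sql/statements",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "warehouse_id": "<warehouse-id>",  # a running SQL warehouse
        "statement": """
            CREATE TABLE IF NOT EXISTS main.default.upload_meta (
                file_name   STRING,
                file_path   STRING,
                uploaded_at TIMESTAMP
            )
        """,
    },
)
resp.raise_for_status()
print(resp.json().get("status"))  # e.g. {'state': 'SUCCEEDED'}
```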

1 More Replies
JOFinancial
by Visitor
  • 42 Views
  • 1 reply
  • 0 kudos

No Data for External Table from Blob Storage

Hi All, I am trying to create an external table from an Azure Blob Storage container. I receive no errors, but there is no data in the table. The Blob Storage container holds 4 CSV files with the same columns and about 10k rows of data. Am I missing someth...

Latest Reply
Hkesharwani
New Contributor III
  • 0 kudos

Hi, the code looks completely fine. Please check whether your files use any delimiter other than ','. If your CSV files use a different delimiter, you can specify it in the table definition using the OPTIONS clause. Just to confirm, I created a sample table a...
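A minimal sketch of an external CSV table with an explicit delimiter, per the OPTIONS suggestion above; the table name, storage path, and ';' delimiter are hypothetical and should be matched to your own files:

```python
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.default.blob_csv_ext
    USING CSV
    OPTIONS (
        header 'true',   -- first line holds column names
        delimiter ';'    -- change to whatever your files actually use
    )
    LOCATION 'abfss://container@account.dfs.core.windows.net/csv-data/'
""")
```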

TinasheChinyati
by New Contributor
  • 1645 Views
  • 2 replies
  • 0 kudos

Is Databricks capable of housing OLTP and OLAP?

Hi data experts. I currently have an OLTP database (Azure SQL DB) that keeps data only for the past 14 days. We use partition switching to achieve that and have an ETL (Azure Data Factory) process that feeds the data warehouse (Azure Synapse Analytics). My requ...

Latest Reply
ChrisCkx
New Contributor II
  • 0 kudos

Hi @Kaniz, I have looked at this topic extensively and have even tried to implement it. I am a champion of Databricks at my organization, but I do not think that it currently enables OLTP scenarios. The closest I have gotten to it is by using the St...

1 More Replies
dbal
by New Contributor III
  • 518 Views
  • 2 replies
  • 0 kudos

withColumnRenamed does not work with databricks-connect 14.3.0

I am not able to run our unit test suite due to a possible bug in the databricks-connect library. The problem is with the DataFrame transformation withColumnRenamed. When I run it on a Databricks cluster (Databricks Runtime 14.3 LTS), the column is ren...

Latest Reply
shan_chandra
Esteemed Contributor
  • 0 kudos

@dbal - can you please try withColumnsRenamed() instead? Reference: https://docs.databricks.com/en/release-notes/dbconnect/index.html#databricks-connect-1430-python
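A minimal sketch of that suggested workaround: withColumnsRenamed() (plural) takes a dict of old-to-new names and is available in recent PySpark / Databricks Connect versions. The column names here are hypothetical:

```python
# Tiny example DataFrame to rename against.
df = spark.createDataFrame([(1, "a")], ["id", "val"])

# Rename several columns in one call instead of chaining withColumnRenamed.
renamed = df.withColumnsRenamed({"id": "user_id", "val": "value"})
renamed.printSchema()  # shows user_id and value in place of id and val
```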

1 More Replies
Sushmg
by Visitor
  • 620 Views
  • 1 reply
  • 0 kudos

Call REST API

Hi, there is a requirement to create a pipeline that calls an API and stores that data in a data warehouse. Can you suggest the best way to do this?

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Sushmg, Please refer to the Databricks documentation and resources for more detailed instructions and examples.
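One common pattern, sketched with a hypothetical endpoint and target table: pull JSON from the API with requests, land it in a Spark DataFrame, and append it to a Delta table that downstream warehouse queries can read:

```python
import requests

# Hypothetical API; assumes it returns a JSON array of flat objects.
response = requests.get("https://api.example.com/v1/records")
response.raise_for_status()
records = response.json()

# Convert the payload to a DataFrame and append it to a Delta table.
df = spark.createDataFrame(records)
(
    df.write
    .format("delta")
    .mode("append")
    .saveAsTable("main.default.api_records")  # hypothetical target table
)
```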

Dhruv-22
by New Contributor III
  • 27 Views
  • 0 replies
  • 0 kudos

NamedStruct fails in the 'IN' query

I've posted the same question on Stack Overflow (link) as well. I will post any solution I get there. I was trying to understand using many columns in the IN query and came across this statement: SELECT (1, 2) IN (SELECT c1, c2 FROM VALUES(1, 2), (3, 4...

StephanKnox
by Visitor
  • 33 Views
  • 1 reply
  • 1 kudos

Parametrized SQL - Pass column names as a parameter?

Hi all, Is there a way to pass a column name (not a value) in a parametrized Spark SQL query? I am trying to do it like so; however, it does not work, as I think the column name gets expanded like 'value', i.e. surrounded by single quotes: def count_nulls(df:D...

Latest Reply
Kaniz
Community Manager
  • 1 kudos

Hi @StephanKnox , You can use string interpolation (f-strings) to dynamically insert the column name into your query.
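A minimal sketch of that f-string approach: interpolate the column *name* into the SQL text yourself and keep parameterization for *values*. The function and view names are hypothetical, and since f-strings bypass parameter binding, only interpolate names validated against df.columns:

```python
from pyspark.sql import DataFrame

def count_nulls(df: DataFrame, col_name: str) -> int:
    """Count NULLs in one column, interpolating the column name safely."""
    if col_name not in df.columns:  # guard against typos and injection
        raise ValueError(f"unknown column: {col_name}")
    df.createOrReplaceTempView("tmp_counts")
    row = spark.sql(
        f"SELECT COUNT(*) AS n FROM tmp_counts WHERE `{col_name}` IS NULL"
    ).first()
    return row["n"]
```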

Dhruv-22
by New Contributor III
  • 209 Views
  • 2 replies
  • 0 kudos

Understanding least common type in Databricks

I was reading the data type rules and found out about the least common type. I have a doubt: what is the least common type of STRING and INT? The referred link gives the following example, saying the least common type is BIGINT. -- The least common type between...

Latest Reply
Kaniz
Community Manager
  • 0 kudos

Hi @Dhruv-22, The concept of the least common type can indeed be a bit tricky, especially when dealing with different data types like STRING and INT. Let's dive into this and clarify the behaviour in Apache Spark™ and Databricks. Coalesce Functi...
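A quick way to see which least common type Spark actually picks is typeof() over a coalesce() of the two types in question; the BIGINT result below matches the documentation example the question cites, but verify on your own runtime:

```python
# coalesce(1, '1') mixes INT and STRING; typeof() reports the resolved type.
spark.sql("SELECT typeof(coalesce(1, '1')) AS least_common").show()
# +------------+
# |least_common|
# +------------+
# |      bigint|
# +------------+
```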

1 More Replies
SparkMaster
by New Contributor III
  • 3674 Views
  • 10 replies
  • 1 kudos

Why can't I delete experiments without deleting the notebook? Or better Organize experiments into folders?

My Databricks Experiments page is cluttered with a whole lot of experiments. Many of them are notebooks which are showing up there for some reason (even though they didn't have an MLflow run associated with them). I would like to delete the experiments, but it...

Latest Reply
mhiltner
New Contributor II
  • 1 kudos

Hey @Debayan @SparkMaster, a bit late here, but I believe this is caused by a click on the right-side Experiments icon. This may look like a meaningless click, but it actually triggers a run.

9 More Replies
210227
by New Contributor III
  • 66 Views
  • 1 reply
  • 0 kudos

Resolved! External table from external location

Hi, I'm creating an external table from an existing external location and am a bit puzzled as to what permissions I need for it, or what the correct way of defining the S3 path with wildcards is. This: create external table if not exists test_catalogue_dev.b...

Latest Reply
210227
New Contributor III
  • 0 kudos

Just for reference, the wildcard is not needed in this case; it was just a misleading error message. Here 's3://test-data/full/2023/01/' instead of 's3://test-data/full/2023/01/*/' was the correct path.
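A minimal sketch of the resolved approach: point LOCATION at the prefix itself, with no trailing wildcard. The S3 path is the one from the thread; the schema/table names and the DELTA format are hypothetical and should match your data:

```python
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS test_catalogue_dev.bronze.my_table
    USING DELTA  -- hypothetical format; match the files under the prefix
    LOCATION 's3://test-data/full/2023/01/'
""")
```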
