Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

haseebkhan1421
by New Contributor
  • 2565 Views
  • 1 replies
  • 3 kudos

How can I create a column on the fly that has the same value for all rows in a Spark SQL query?

I have a SQL query which I am converting into Spark SQL in Azure Databricks, running in my Jupyter notebook. In my SQL query, a column named Type is created on the fly which has the value 'Goal' for every row: SELECT Type='Goal', Value FROM table. Now, when...

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 3 kudos

The correct syntax would be: SELECT 'Goal' AS Type, Value FROM table

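The accepted answer works in any SQL engine, not just Spark SQL; a minimal sketch using Python's built-in sqlite3 (the table and values are invented for illustration):

```python
import sqlite3

# In-memory database with a hypothetical "scores" table standing in for the
# question's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (Value INTEGER)")
conn.executemany("INSERT INTO scores (Value) VALUES (?)", [(1,), (2,), (3,)])

# A string literal aliased with AS produces a column carrying the same value
# on every row -- the same syntax as the Spark SQL answer above.
rows = conn.execute("SELECT 'Goal' AS Type, Value FROM scores").fetchall()
print(rows)  # [('Goal', 1), ('Goal', 2), ('Goal', 3)]
```

The `Type='Goal'` form from the question is T-SQL column aliasing; the `'Goal' AS Type` form is the portable spelling.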
maheshwor
by New Contributor III
  • 1268 Views
  • 1 replies
  • 2 kudos

Resolved! Databricks Views

How do we find the definition of a view in Databricks?

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 2 kudos

You can use the extended table description. For example, the following Python code will print the current definition of the view:
table_name = ""
df = spark.sql("describe table extended {}".format(table_name))
df.createOrReplaceTempView("view_desript...

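The reply's idea is that every SQL engine keeps view text in a queryable catalog. DESCRIBE TABLE EXTENDED is Spark-only, so as an illustration of the same idea, here is the SQLite equivalent via Python's sqlite3, where view definitions live in sqlite_master (the view and table names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("CREATE VIEW v AS SELECT x FROM t WHERE x > 0")

# SQLite keeps the original CREATE VIEW statement in the sqlite_master
# catalog; Spark surfaces the same information via DESCRIBE TABLE EXTENDED.
(definition,) = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'view' AND name = 'v'"
).fetchone()
print(definition)  # CREATE VIEW v AS SELECT x FROM t WHERE x > 0
```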
TimothyClotwort
by New Contributor
  • 4187 Views
  • 1 replies
  • 0 kudos

SQL Alter table command not working for me

I am a novice with Databricks. I am performing some independent learning. I am trying to add a column to an existing table. Here is my syntax: %sql ALTER TABLE car_parts ADD COLUMNS (engine_present boolean) which returns the error: SyntaxError: inva...

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 0 kudos

Is the table you are working with in the Delta format? The table commands (i.e. ALTER) do not work for all storage formats. For example, if I run the following commands, then I can alter a table. Note: there is no data in the table but the table exist...

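As the reply notes, ALTER TABLE support depends on the table's storage format (it works on Delta tables). The generic SQL shape of the command can be sketched with sqlite3; the table and column names are the question's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE car_parts (part_name TEXT)")

# SQLite spells it ADD COLUMN; the question's Spark SQL on Delta uses
# ADD COLUMNS (...). Either way the column appears without rewriting data.
conn.execute("ALTER TABLE car_parts ADD COLUMN engine_present BOOLEAN")

cols = [row[1] for row in conn.execute("PRAGMA table_info(car_parts)")]
print(cols)  # ['part_name', 'engine_present']
```

A Python-level "SyntaxError: invalid syntax" (rather than a Spark AnalysisException) may also mean the %sql magic was not interpreted, but that is a guess from the error text.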
rami1
by New Contributor II
  • 1724 Views
  • 1 replies
  • 0 kudos

Missing Databricks Datasets

Hi, I am looking at my Databricks workspace and it looks like I am missing the DBFS Databricks-dataset root folder. The DBFS root folders I can view are FileStore, local_disk(), mnt, pipelines and user. Can I mount Databricks-dataset or am I missing some...

Latest Reply
Ryan_Chynoweth
Esteemed Contributor
  • 0 kudos

If you run the following command do you receive an error? Or do you just get an empty list? dbutils.fs.ls("/databricks-datasets")

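The reply's diagnostic distinction (an error vs. an empty list) is the same one ordinary filesystem listing draws; a local stand-in using only the standard library, with a temporary directory playing the role of the DBFS folder:

```python
import os
import tempfile

# An existing-but-empty directory lists as an empty list...
empty_dir = tempfile.mkdtemp()
print(os.listdir(empty_dir))  # []

# ...while a path that does not exist raises instead -- the distinction
# dbutils.fs.ls draws between a mounted-but-empty and a missing folder.
missing = os.path.join(empty_dir, "does-not-exist")
try:
    os.listdir(missing)
except FileNotFoundError:
    print("missing path raises")
```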
NickGoodfella
by New Contributor
  • 1758 Views
  • 1 replies
  • 1 kudos

DNS_Analytics Notebook Problems

Hello everyone! First post on the forums; I've been stuck on this for a while now and cannot seem to understand why this is happening. Basically, I have been using what seems to be a premade Databricks notebook from Databricks themselves for a DNS Analytics exa...

Latest Reply
sean_owen
Databricks Employee
  • 1 kudos

@NickGoodfella, what's the notebook you're looking at, this one? https://databricks.com/notebooks/dns-analytics.html Are you sure all the previous cells executed? This is suggesting there isn't a model at the path that's expected. You can take a lo...

User16826994223
by Honored Contributor III
  • 1168 Views
  • 1 replies
  • 0 kudos

The state in a stream is growing too large

I have a customer with a streaming pipeline from Kafka to Delta. They are leveraging RocksDB, watermarking for 30 minutes, and attempting to dropDuplicates. They are seeing their state grow to 6.2 billion rows on a stream that hits at maximum 7000 rows ...

Latest Reply
shaines
New Contributor II
  • 0 kudos

I've seen a similar issue with large state using flatMapGroupsWithState. It is possible that A.) they are not using the state.setTimeout correctly or B.) they are not calling state.remove() when the stored state has timed out, leaving the state to gr...

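The failure mode in the reply (state that is never removed after its timeout) can be simulated with a plain dict standing in for the per-key state store; the keys, event times, and 30-minute retention below are invented to mirror the thread:

```python
# Toy model of a keyed state store: each key maps to its last-seen event time.
# Without the eviction step, state grows with every distinct key forever --
# the behavior described when state.remove() is never called after a timeout.

WATERMARK = 30  # minutes of state to retain, mirroring the 30-min watermark

def process(events, evict=True):
    state = {}
    for minute, key in events:
        state[key] = minute  # dedup bookkeeping for this key
        if evict:
            # Drop keys whose last event is older than the watermark.
            expired = [k for k, t in state.items() if minute - t > WATERMARK]
            for k in expired:
                del state[k]
    return state

# One distinct key per minute over 1000 minutes.
events = [(m, f"key-{m}") for m in range(1000)]
print(len(process(events, evict=False)))  # 1000 -- every key retained
print(len(process(events, evict=True)))   # 31 -- only keys inside the watermark
```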
PadamTripathi
by New Contributor II
  • 5499 Views
  • 2 replies
  • 1 kudos

How to calculate median on an Azure Databricks Delta table using SQL

How to calculate median on Delta tables in Azure Databricks using SQL? select col1, col2, col3, median(col5) from delta_table group by col1, col2, col3

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

Try the percentile function, as median = 50th percentile: https://spark.apache.org/docs/latest/api/sql/#percentile

1 More Reply
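The equivalence the reply relies on (median = 50th percentile) is easy to check outside Spark with Python's statistics module; the sample values are arbitrary:

```python
import statistics

values = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]

median = statistics.median(values)

# quantiles(..., n=100) returns the 1st..99th percentile cut points; index 49
# is the 50th percentile, which Spark's percentile(col, 0.5) also computes.
percentile_50 = statistics.quantiles(values, n=100, method="inclusive")[49]

print(median, percentile_50)  # 3.0 3.0
```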
AlexDavies
by Contributor
  • 2253 Views
  • 1 replies
  • 0 kudos

Generated partition column not being used by optimizer

We have created a table using the new generated column feature (https://docs.microsoft.com/en-us/azure/databricks/delta/delta-batch#deltausegeneratedcolumns) CREATE TABLE ingest.MyEvent( data binary, topic string, timestamp timestamp, date dat...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

I think you have to pass a date in your select query instead of a timestamp. The generated column will indeed derive a date from the timestamp and partition by it. But the docs state: When you write to a table with generated columns and you do not ex...

irfanaziz
by Contributor II
  • 1426 Views
  • 1 replies
  • 1 kudos

What could be the issue with parquet file?

When trying to update or display the dataframe, one of the parquet files is having some issue: "Parquet column cannot be converted. Expected: DecimalType(38,18), Found: DOUBLE". What could be the issue?

Latest Reply
-werners-
Esteemed Contributor III
  • 1 kudos

Try explicitly casting the double column to decimal(38,18) and then do the display.

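The suggested cast can be mimicked locally with Python's decimal module; quantizing to 18 fractional digits plays the role of the DecimalType(38,18) column in the error message (the helper name is invented):

```python
from decimal import Decimal, getcontext

getcontext().prec = 38  # total significant digits, matching DecimalType(38, 18)

def to_dec_38_18(x: float) -> Decimal:
    # Route through str() so the Decimal reflects the printed double rather
    # than its binary expansion, then pin the scale to 18 fractional digits.
    return Decimal(str(x)).quantize(Decimal("1." + "0" * 18))

print(to_dec_38_18(0.1))  # 0.100000000000000000
```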
dbsuersu
by New Contributor II
  • 1522 Views
  • 1 replies
  • 0 kudos

"dbfs:" quote added as a prefix to file path

There is a mount path /mnt/folder. I am passing the filename as a variable from another function and completing the path variable as follows: filename = file.txt, path = /mnt/folder/subfolder/ + filename. When I'm trying to use the path variable in a function, f...

Latest Reply
-werners-
Esteemed Contributor III
  • 0 kudos

Databricks uses the Databricks File System (DBFS) by default. So my guess is you did not mount the path in Databricks.

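The behavior in the title (a "dbfs:" prefix appearing on a bare path) is consistent with the reply: paths without an explicit scheme are resolved against DBFS. A sketch of that resolution rule, assuming (not confirmed by the thread) that unscoped paths get the default dbfs: scheme:

```python
# Hypothetical illustration: treat a bare path as DBFS-relative, so
# "/mnt/..." is reported back as "dbfs:/mnt/...".
def resolve(path: str) -> str:
    # Assumption for this sketch: paths without an explicit scheme receive
    # the default dbfs: scheme, matching the prefix seen in the question.
    if "://" in path or path.startswith("dbfs:"):
        return path
    return "dbfs:" + path

filename = "file.txt"
path = "/mnt/folder/subfolder/" + filename
print(resolve(path))  # dbfs:/mnt/folder/subfolder/file.txt
```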
reza-eghbali
by New Contributor
  • 1314 Views
  • 0 replies
  • 0 kudos

Kafka consumer and a web server simultaneously, thread blocking problem in microservice

Assumptions: there are microservices behind an API gateway; they communicate through HTTP synchronously. Obviously, each one of those microservices is a web server. Now I want my microservice to act as a Kafka producer and "consumer" too. More clea...

JoãoRafael
by New Contributor II
  • 3137 Views
  • 3 replies
  • 0 kudos

Double job execution caused by databricks' RemoteServiceExec using databricks-connector

Hello! I'm using databricks-connect to launch Spark jobs using Python. I've validated that the Python version (3.8.10) and runtime version (8.1) are supported by the installed databricks-connect (8.1.10). Every time a mapPartitions/foreachParti...

2 More Replies
Kotofosonline
by New Contributor III
  • 1194 Views
  • 1 replies
  • 0 kudos

Bug Report: Date type with year less than 1000 (years 1-999) in spark sql where [solved]

Hi, I noticed unexpected behavior for the Date type. If the year value is less than 1000, filtering does not work. Steps: create table test (date Date); insert into test values ('0001-01-01'); select * from test where date = '0001-01-01' Returns 0 rows....

Latest Reply
Kotofosonline
New Contributor III
  • 0 kudos

Hm, seems to work now.

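The year-1 date value itself is well-formed; Python's datetime handles it, which helps confirm the original report concerned the engine's filtering, not the literal (and per the follow-up, it now seems to work):

```python
from datetime import date

# Year 1 is a legal date; ISO parsing and equality behave normally.
d = date.fromisoformat("0001-01-01")
print(d == date(1, 1, 1))  # True
print(d.isoformat())       # 0001-01-01
```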
