Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

mac08_flo
by New Contributor
  • 417 Views
  • 1 replies
  • 0 kudos

Log creation

Good afternoon. I'm trying to add logging while building my code. The problem is that I still haven't found a way to write the logs to a separate file, so that they don't just go to the terminal but are stored in a file (example.l...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

To store logs in a file instead of the terminal, you can use Python's basic logging configuration. Here is an example of how to do it: import logging # Basic logging configuration logging.basicConfig( fi...
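A minimal sketch of what that truncated configuration likely looks like, assuming a file named example.log and INFO level (both taken from the post, not confirmed):

```python
import logging

# Basic logging configuration: write records to a file instead of the terminal.
# Filename, level, and format are assumptions; adjust them to your own needs.
logging.basicConfig(
    filename="example.log",
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)

logger = logging.getLogger(__name__)
logger.info("Log creation started")  # written to example.log, not stdout
```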

costi9992
by New Contributor III
  • 380 Views
  • 1 replies
  • 0 kudos

Missing Fields in Databricks REST API Documentation & SDK Due to OpenAPI Spec Gaps

Hi Community,I've been working with the Databricks REST APIs and noticed some inconsistencies between the API documentation and the actual API responses. Specifically, there are a few fields returned in the API responses that are not documented but a...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Hello, thanks for your question. Regarding last_time_activity and disk_spec: these fields have been deprecated, which is why they no longer appear in the API docs. You can refer to https://kb.databricks.com/clusters/databricks-api-la...

Leszek1
by New Contributor II
  • 310 Views
  • 1 replies
  • 0 kudos

Workflow job tasks wait

Hi, I've been having issues with Workflow pipelines for the last 3-4 days. Performance is degraded, and the strangest behavior is that tasks wait ~2-3 minutes before starting to execute code in the notebook. This is visible when you look at one of the ta...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Hello, are you still experiencing this issue? Are you counting the time it takes for the cluster to start up, or was the cluster already running or using serverless?

flamezi2
by New Contributor
  • 329 Views
  • 1 replies
  • 0 kudos

Invalid request when using manual generation of an account-level access token

I need to generate an access token using the REST API and was following the guide here: manually-generate-an-account-level-access-token. When I try this cURL in Postman, I get an error, but the error description is not helpful. Error: I don't know what I'm missi...

[Attached screenshots: flamezi2_1-1727934079195.png, flamezi2_0-1727934045043.png]
Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Are you replacing Account_id with the actual account ID associated with your subscription? Also, what token are you using to authenticate and run this API call?
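For reference, a minimal, hedged sketch of the OAuth call that guide describes, assuming an AWS account-console host and a service principal's client ID/secret (all placeholder values are hypothetical; double-check the endpoint against the linked guide for your cloud):

```python
import requests

# Hypothetical placeholders -- replace with your account ID and service principal credentials.
ACCOUNT_ID = "<account-id>"
CLIENT_ID = "<service-principal-client-id>"
CLIENT_SECRET = "<service-principal-client-secret>"

# Assumed AWS account-console host; Azure/GCP account consoles use different hosts.
url = f"https://accounts.cloud.databricks.com/oidc/accounts/{ACCOUNT_ID}/v1/token"

resp = requests.post(
    url,
    auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP basic auth with the service principal
    data={"grant_type": "client_credentials", "scope": "all-apis"},
)
resp.raise_for_status()
print(resp.json()["access_token"])
```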

GodSpeed
by New Contributor
  • 424 Views
  • 1 replies
  • 0 kudos

Postman Collection Alternatives for Data-Centric API Management?

I’ve been using Postman collections to manage APIs in my data projects, but I’m exploring alternatives. Are there tools like Apidog or Insomnia that perform better for API management, particularly when working with large data sets or data-driven work...

Latest Reply
Walter_C
Databricks Employee
  • 0 kudos

Insomnia: Insomnia is another strong alternative that is frequently recommended. It is known for its simplicity and effectiveness in making REST API requests. Insomnia supports the import of Postman collections and is praised for its performance and ...

Jcowell
by New Contributor II
  • 241 Views
  • 2 replies
  • 0 kudos

Are the "Limit input rate" docs incorrect?

In the Databricks docs it says, "If you use maxBytesPerTrigger in conjunction with maxFilesPerTrigger, the micro-batch processes data until either the maxFilesPerTrigger or maxBytesPerTrigger limit is reached." But based on the source code this is not true...

Latest Reply
ozaaditya
Contributor
  • 0 kudos

In my opinion, the reason for not using both options simultaneously is that the framework would face a logical conflict: should it stop reading after the maximum number of files is reached, even if the size limit hasn't been exceeded? Or should it stop ...
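For context, a minimal sketch of a stream that sets both options (paths are hypothetical placeholders); per the quoted docs, the micro-batch is bounded by whichever limit is reached first:

```python
# Sketch: setting both rate-limit options on an Auto Loader stream.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/events")  # hypothetical path
    .option("maxFilesPerTrigger", 100)   # cap on files per micro-batch
    .option("maxBytesPerTrigger", "1g")  # soft cap on bytes per micro-batch
    .load("s3://my-bucket/raw/events/")  # hypothetical path
)
```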

1 More Replies
183530
by New Contributor III
  • 1482 Views
  • 2 replies
  • 2 kudos

How to search an array of words in a text field

Example:
TABLE 1, FIELD_TEXT:
I like salty food and Italian food
I have Italian food
bread, rice and beans
mexican foods
coke, sprite
array['italia', 'mex', 'coke']
Match TABLE1 x ARRAY. Results:
I like salty food and Italian food
I have Italian food
mexican foods
is ...

Latest Reply
Meredithharper
New Contributor II
  • 2 kudos

Yes, you can do it in SQL with LIKE or IN, and in PySpark using array_contains; it's ideal for filtering on words like halal catering Barcelona, catering, and many more.
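A minimal PySpark sketch of that idea, assuming the table and column names from the example above; since the search terms are substrings of the text field, this uses contains() rather than array_contains:

```python
from functools import reduce
from pyspark.sql import functions as F

# Hypothetical table/column names taken from the example above.
words = ["italia", "mex", "coke"]
df = spark.table("table1")

# OR together one case-insensitive substring match per search word.
condition = reduce(
    lambda acc, w: acc | F.lower(F.col("FIELD_TEXT")).contains(w.lower()),
    words,
    F.lit(False),
)

df.filter(condition).show(truncate=False)
```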

1 More Replies
KrzysztofPrzyso
by New Contributor III
  • 1302 Views
  • 1 replies
  • 3 kudos

Best Practices for Copying Data Between Environments

Hi Everyone,I'd like to start a discussion about the best practices for copying data between environments. Here's a typical setup:Environment Setup:The same region and metastore (Unity Catalog) is used across environments.Each environment has a singl...

Latest Reply
Sidhant07
Databricks Employee
  • 3 kudos

Using CTAS (CREATE TABLE AS SELECT) might be a more robust solution for your use case. Independence: CTAS creates a new, independent copy of the data, avoiding dependencies on the source table. Simplified access control: Access rights can be managed so...
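A minimal sketch of the CTAS approach, with hypothetical Unity Catalog names standing in for the source and target environments:

```python
# Sketch: copy a table across environments with CTAS.
# Catalog, schema, and table names are hypothetical placeholders.
spark.sql("""
    CREATE OR REPLACE TABLE dev_catalog.sales.orders
    AS SELECT * FROM prod_catalog.sales.orders
""")
```

The resulting copy is fully independent of the source table, so grants and lifecycle can be managed per environment.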

arthurburkhardt
by New Contributor
  • 520 Views
  • 2 replies
  • 0 kudos

Auto Loader changes the order of columns when inferring JSON schema (sorted lexicographically)

We are using Auto Loader to read json files from S3 and ingest data into the bronze layer. But it seems auto loader struggles with schema inference and instead of preserving the order of columns from the JSON files, it sorts them lexicographically.Fo...

Data Engineering
auto.loader
json
schema
Latest Reply
Sidhant07
Databricks Employee
  • 0 kudos

Auto Loader's default behavior of sorting columns lexicographically during schema inference is indeed a limitation when preserving the original order of JSON fields is important. Unfortunately, there isn't a built-in option in Auto Loader to maintain...
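A minimal workaround sketch, assuming hypothetical paths and column names: reorder the columns explicitly after the Auto Loader read (or, equivalently, supply an explicit schema instead of relying on inference):

```python
# Sketch: impose a desired column order despite lexicographic schema inference.
desired_order = ["id", "timestamp", "payload", "source"]  # hypothetical columns

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/bronze")  # hypothetical path
    .load("s3://my-bucket/raw/")                                            # hypothetical path
    .select(*desired_order)  # reorder columns explicitly after inference
)
```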

1 More Replies
simple89
by New Contributor
  • 319 Views
  • 1 replies
  • 0 kudos

Runtime increases exponentially from 11.3 to 13.3

Hello. I am using R on databricks and using the below approach. My Spark version:Single node: i3.2xlarge · On-demand · DBR: 11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12) · us-east-1a, the job takes 1 hourI install all R packages (including a geo...

Latest Reply
Sidhant07
Databricks Employee
  • 0 kudos

Hello! It's possible that the increase in runtime when upgrading from Spark 3.3.0 (DBR 11.3) to Spark 3.4.1 (DBR 13.3) is due to changes in the underlying R runtime or package versions. When you upgrade to a new version of Spark, the R packages that ...

rcostanza
by New Contributor III
  • 363 Views
  • 1 replies
  • 1 kudos

Changing a Delta Live Table's schema

I have a Delta Live Table whose source is a Kafka stream. One of the columns is a Decimal and I need to change its precision.What's the correct approach to changing the DLT's schema?Just changing the column's precision in the DLT definition will resu...

Latest Reply
Sidhant07
Databricks Employee
  • 1 kudos

To change the precision of a Decimal column in a Delta Live Table (DLT) with a Kafka stream source, you can follow these steps: 1. Create a new column in the DLT with the desired precision. 2. Copy the data from the old column to the new column. 3. Dro...
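A minimal sketch of the "new column with the desired precision" step inside a DLT definition, with hypothetical table and column names; changing a streaming table's schema this way generally also requires a full refresh of the pipeline:

```python
import dlt
from pyspark.sql import functions as F

# Sketch: add a higher-precision copy of the Decimal column in the DLT definition.
@dlt.table(name="orders_silver")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")  # hypothetical upstream DLT table
        .withColumn("amount_v2", F.col("amount").cast("decimal(18, 4)"))
    )
```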

lprevost
by Contributor
  • 290 Views
  • 1 replies
  • 0 kudos

sampleBy stream in DLT

I would like to create a sampleBy (stratified version of sample) copy/clone of my delta table.   Ideally, I'd like to do this using a DLT.     My source table grows incrementally each month as batch files are added and autoloader picks them up.    Id...

Latest Reply
Sidhant07
Databricks Employee
  • 0 kudos

You can create a stratified sample of your delta table using the `sampleBy` function in Databricks. However, DLT  does not support the `sampleBy` function directly. To work around this, you can create a notebook that uses the `sampleBy` function to c...
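A minimal sketch of that notebook step, with hypothetical table names, strata column, and fractions:

```python
# Sketch: write a stratified sample of a Delta table from a regular notebook.
fractions = {"A": 0.10, "B": 0.05, "C": 0.20}  # hypothetical sampling fraction per stratum

sampled = spark.table("main.bronze.events").sampleBy("category", fractions=fractions, seed=42)

(sampled.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("main.bronze.events_sample"))
```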

zmwaris1
by New Contributor II
  • 247 Views
  • 1 replies
  • 2 kudos

Connect Databricks delta table to Apache Kylin using JDBC

I am using Apache Kylin for Data Analytics and Databricks for data modelling and filtering. I have my final data in gold tables and I would like to integrate this data with Apache Kylin using JDBC where the gold table will be the Data Source. I would...

Latest Reply
Sidhant07
Databricks Employee
  • 2 kudos

Yes, it is possible to integrate your Databricks gold tables with Apache Kylin using JDBC. This integration allows you to use Apache Kylin's OLAP capabilities on the data stored in your Databricks environment. Here's how you can achieve this: ## Conn...
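As a rough sketch of the connection details Kylin would need as a JDBC data source (hostname, HTTP path, and token are hypothetical placeholders; verify the exact property names against the Databricks JDBC driver documentation for your driver version):

```python
# Sketch: assemble the JDBC URL for a Databricks SQL warehouse.
server_hostname = "adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace host
http_path = "/sql/1.0/warehouses/abcdef1234567890"              # hypothetical warehouse HTTP path
access_token = "<personal-access-token>"

jdbc_url = (
    f"jdbc:databricks://{server_hostname}:443/default;"
    f"transportMode=http;ssl=1;AuthMech=3;"
    f"httpPath={http_path};UID=token;PWD={access_token}"
)
driver_class = "com.databricks.client.jdbc.Driver"  # assumed driver class name
print(jdbc_url)
```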

YOUKE
by New Contributor II
  • 364 Views
  • 4 replies
  • 1 kudos

Resolved! Managed Tables on Azure Databricks

Hi everyone,I was trying to understand: when a managed table is created, Databricks stores the metadata in the Hive metastore and the data in the cloud storage managed by it, which in the case of Azure Databricks will be an Azure Storage Account. But...

Latest Reply
BraydenJordan
New Contributor II
  • 1 kudos

Thank you so much for the solution.

3 More Replies

Connect with Databricks Users in Your Area

Join a Regional User Group to connect with local Databricks users. Events will be happening in your city, and you won’t want to miss the chance to attend and share knowledge.

If there isn’t a group near you, start one and help create a community that brings people together.

Request a New Group