Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

emanuele_maffeo
by New Contributor III
  • 2990 Views
  • 5 replies
  • 8 kudos

Resolved! Trigger.AvailableNow on Scala - compile issue

Hi everybody, Trigger.AvailableNow was released with the Databricks 10.1 runtime and we would like to use this new feature with Auto Loader. We write all our data pipelines in Scala and our projects import Spark as a provided dependency. If we try to sw...

Latest Reply
Anonymous
Not applicable
  • 8 kudos

You can switch to Python. Depending on what you're doing, and whether you're using UDFs, there shouldn't be any difference at all in terms of performance.
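
For anyone landing here, a minimal sketch of what the Python route looks like, assuming an Auto Loader source and a runtime whose Python API exposes the availableNow trigger; the paths and source format are hypothetical:

```python
# Hypothetical paths and source format; adjust to your pipeline.
(spark.readStream
    .format("cloudFiles")                                  # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/events")
    .load("/mnt/landing/events")
    .writeStream
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .trigger(availableNow=True)                            # Python spelling of Trigger.AvailableNow
    .start("/mnt/bronze/events"))
```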

4 More Replies
alonisser
by Contributor
  • 2475 Views
  • 3 replies
  • 4 kudos

Resolved! How to migrate an existing workspace to an external metastore

Currently we're on an Azure Databricks workspace that we set up during the POC, a long time ago. In the meantime we have built quite a production workload on top of Databricks. Now we want to split workspaces - one for analysts and one for data engineeri...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 4 kudos

From a Databricks notebook, just run mysqldump; the server address and details you can take from the logs or configuration. I am also including a link to an example notebook: https://docs.microsoft.com/en-us/azure/databricks/kb/_static/notebooks/2016-election-tweets.h...

2 More Replies
USHAK
by New Contributor II
  • 851 Views
  • 1 reply
  • 0 kudos

Hi, I am trying to schedule - Exam: Databricks Certified Associate Developer for Apache Spark 3.0 - Python. In the cart I couldn't proceed ...

Hi, I am trying to schedule Exam: Databricks Certified Associate Developer for Apache Spark 3.0 - Python. In the cart, I couldn't proceed without entering a voucher. I do not have a voucher. Please help.

Latest Reply
USHAK
New Contributor II
  • 0 kudos

Can someone please respond to my above question? Can I take the certification test without a voucher?

  • 0 kudos
laus
by New Contributor III
  • 6930 Views
  • 5 replies
  • 6 kudos

Resolved! How to sort widgets in a specific order?

I'd like to have a couple of widgets, one for the start and another for the end date. I want them to appear in that order, but when I run the code below, the end date shows up before the start date. How can I order them the way I desired? dbutils.widgets.text("s...

Latest Reply
laus
New Contributor III
  • 6 kudos

@Ravirahul Padmanabhan​  and @Werner Stinckens​ , for me going into edit mode as suggested by Ravi worked like a charm! Thank you both!
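
For reference, a minimal sketch of the widget setup with hypothetical names and defaults. In many runtimes widgets render in alphabetical order, so numbering the names is one way to force start before end; alternatively, the widget panel's edit mode (the fix that worked in this thread) lets you rearrange them by hand:

```python
# Hypothetical widget names/defaults; the numeric prefixes force
# start-before-end under alphabetical ordering.
dbutils.widgets.text("1_start_date", "2024-01-01", "Start date")
dbutils.widgets.text("2_end_date", "2024-12-31", "End date")
```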

4 More Replies
Jeff1
by Contributor II
  • 9254 Views
  • 3 replies
  • 4 kudos

Resolved! How to convert lat/long to geohash in databricks using geohashTools R library

I continue to receive a parsing error when attempting to convert lat/long data to a geohash in Databricks. I've tried two coding methods in R and get the same error. library(geohashTools) Method #1: my_tbl$geo_hash <- gh_encode(my_tbl$Latitude, my_tbl...

Latest Reply
Jeff1
Contributor II
  • 4 kudos

The problem was that I was trying to run the gh_encode function on a Spark dataframe. I needed to collect the data into an R dataframe and then run the function.

2 More Replies
manasa
by Contributor
  • 11920 Views
  • 3 replies
  • 7 kudos

Resolved! How to set retention period for a delta table lower than the default period? Is it even possible?

I am trying to set the retention period for a Delta table using the following commands: deltaTable = DeltaTable.forPath(spark, delta_path) spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false") deltaTable.logRetentionDuration = "interval 1...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 7 kudos

There are two ways: 1) set it on the cluster (Clusters -> Edit -> Spark -> Spark config): spark.databricks.delta.retentionDurationCheck.enabled false, or 2) just before DeltaTable.forPath, set (I think you need to change the order in your code): spark.conf.se...
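
A minimal sketch of that order of operations, with a hypothetical table path; the TBLPROPERTIES names are the documented Delta retention settings:

```python
from delta.tables import DeltaTable

# Disable the safety check first, before any table handle is created.
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")

delta_path = "/mnt/delta/events"  # hypothetical path
spark.sql(f"""
    ALTER TABLE delta.`{delta_path}` SET TBLPROPERTIES (
        'delta.logRetentionDuration' = 'interval 1 days',
        'delta.deletedFileRetentionDuration' = 'interval 1 days'
    )
""")

deltaTable = DeltaTable.forPath(spark, delta_path)
deltaTable.vacuum(24)  # hours; below the 7-day default, hence the disabled check
```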

2 More Replies
AmanSehgal
by Honored Contributor III
  • 4305 Views
  • 5 replies
  • 12 kudos

Resolved! Query delta tables using databricks cluster in near real time.

I'm trying to query Delta tables using the JDBC connector in a Ruby app. I've noticed that it takes around 8 seconds just to connect to the Databricks cluster, and then additional time to run the query. The app is connected to a web portal where users genera...

Latest Reply
User16763506477
Contributor III
  • 12 kudos

Hi @Aman Sehgal​, could you please check SQL endpoints? A SQL endpoint uses the Photon engine, which can reduce query processing time, and a Serverless SQL endpoint can accelerate the launch time. More info: https://docs.databricks.com/sql/admin/sql-endpoin...
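
A minimal sketch of querying through a SQL endpoint from Python with the databricks-sql-connector package (the equivalent of the JDBC route discussed above); the hostname, HTTP path and token are placeholders:

```python
from databricks import sql  # pip install databricks-sql-connector

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # hypothetical
    http_path="/sql/1.0/warehouses/abc123",                        # hypothetical
    access_token="dapiXXXXXXXX",                                   # hypothetical
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM my_db.my_delta_table LIMIT 10")
        for row in cursor.fetchall():
            print(row)
```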

4 More Replies
zayeem
by New Contributor
  • 2330 Views
  • 1 reply
  • 3 kudos

Resolved! Databricks - Jobs Last run date

Is there a way to get the last run date of a job (or jobs)? I am trying to compile a report and want to see whether this output exists either in the Databricks Jobs CLI output or via the API.

Latest Reply
AmanSehgal
Honored Contributor III
  • 3 kudos

Sure. Using the Databricks Jobs API you can get this information. Use the following API endpoint to get a list of all the jobs and their executions to date, in descending order. You can pass job_id as a parameter to get the runs of a specific job. https://<databri...
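
A minimal sketch of that call against the Jobs API runs/list endpoint; the host, token and job_id are placeholders:

```python
import requests
from datetime import datetime, timezone

host = "https://<databricks-instance>"   # hypothetical workspace URL
token = "dapiXXXXXXXX"                   # hypothetical personal access token

resp = requests.get(
    f"{host}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"job_id": 123, "limit": 1},  # hypothetical job_id; newest run first
)
runs = resp.json().get("runs", [])
if runs:
    # start_time is epoch milliseconds
    start = datetime.fromtimestamp(runs[0]["start_time"] / 1000, tz=timezone.utc)
    print(f"Last run started at {start}")
```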

Anonymous
by Not applicable
  • 906 Views
  • 0 replies
  • 3 kudos

March Madness + Data  Here at Databricks we like to use (you guessed it) data in our daily lives. Today kicks off a series called Databrags 🎉 ...

March Madness + Data. Here at Databricks we like to use (you guessed it) data in our daily lives. Today kicks off a series called Databrags! Databrags are glimpses into how Bricksters and community folks like you use data to solve everyday problems, e...

Abel_Martinez
by Contributor
  • 1737 Views
  • 1 reply
  • 1 kudos

Resolved! Create Databricks service account

Hi all, I need to create service account users who can only query some Delta tables. I guess I do that by creating the user and granting SELECT rights on the desired tables. But Databricks requires a mail account for these users. Is there a way to cr...

Latest Reply
Abel_Martinez
Contributor
  • 1 kudos

Hi @Kaniz Fatma​, I've checked the link, but the standard method requires a mailbox, and user creation via the SCIM API looks too complicated. I solved the issue: I created a mailbox for the service account and created the user using that mailbox....
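
For the grant-only-SELECT part of the question, a minimal sketch run from a notebook once the service-account user exists, assuming table access control is enabled; the table names and account address are hypothetical:

```python
# Hypothetical tables and service-account principal.
for table in ["my_db.delta_table_a", "my_db.delta_table_b"]:
    spark.sql(f"GRANT SELECT ON TABLE {table} TO `svc-reporting@example.com`")
```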

GabrieleMuciacc
by New Contributor III
  • 4027 Views
  • 4 replies
  • 2 kudos

Resolved! Support for kwargs parameter in `/2.1/jobs/create` endpoint for `python_wheel_task`

If I create a job from the web UI and select Python wheel, I can add kwargs parameters. Judging from the generated JSON job description, they appear under a section named `namedParameters`. However, if I use the REST API to create a job, it appears...

Latest Reply
rajeev_thakur_c
Databricks Employee
  • 2 kudos

Hi there, the documentation is not up to date; you should be able to create such a task with namedParameters instead of parameters. Here is a small example: { "name": "test_entry_point", "tasks": [  {   "task_key": "test_entry_point",   "description":...
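
A minimal sketch of the full request, assuming Jobs API 2.1, where the kwargs field is spelled named_parameters in snake_case (the reply's UI-generated JSON shows it as namedParameters); the package, entry point, cluster spec, host and token are placeholders:

```python
import requests

payload = {
    "name": "test_entry_point",
    "tasks": [{
        "task_key": "test_entry_point",
        "python_wheel_task": {
            "package_name": "my_package",        # hypothetical wheel
            "entry_point": "main",               # hypothetical entry point
            "named_parameters": {"env": "dev"},  # the kwargs in question
        },
        "new_cluster": {
            "spark_version": "10.4.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",   # hypothetical node type
            "num_workers": 1,
        },
    }],
}
resp = requests.post(
    "https://<databricks-instance>/api/2.1/jobs/create",  # hypothetical host
    headers={"Authorization": "Bearer dapiXXXXXXXX"},     # hypothetical PAT
    json=payload,
)
print(resp.json())
```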

3 More Replies
Maverick1
by Valued Contributor II
  • 3873 Views
  • 5 replies
  • 6 kudos

How to infer the online feature store table via an MLflow registered model, which is deployed to a SageMaker endpoint?

Can an MLflow registered model automatically infer the online feature store table, if that model is trained and logged via a Databricks Feature Store table and the table is pushed to an online feature store (like AWS RDS)?

Latest Reply
Atanu
Databricks Employee
  • 6 kudos

@Saurabh Verma​, let us know if you need further help on this! Thanks.

4 More Replies
lecardozo
by New Contributor II
  • 4458 Views
  • 5 replies
  • 1 kudos

Resolved! Problems with HiveMetastoreClient and internal Databricks Metastore.

I've been trying to use the HiveMetastoreClient class in Scala to extract some metadata from the Databricks internal metastore, without success. I'm currently using the 7.3 LTS runtime. The error seems to be related to some kind of inconsistency between...

Latest Reply
lecardozo
New Contributor II
  • 1 kudos

Thanks for the reference, @Atanu Sarkar​. It seems a little odd to me that I'd need to change an internal Databricks metastore table to add a column expected by the default Scala client. I'm afraid this could cause issues with other users/jobs ...

4 More Replies
irfanaziz
by Contributor II
  • 5577 Views
  • 4 replies
  • 0 kudos

Resolved! If two Data Factory pipelines run at the same time or share a window of execution, do they share the Databricks Spark cluster (if both have the same linked service)? (Job clusters are those created on the fly, as defined in the linked service.)

Continuing the above case, does that mean that if I have several (say 5) ADF pipelines scheduled regularly at the same time, it's better to use an existing cluster, as all of the ADF pipelines would share the same cluster and hence the cost would be lower?

Latest Reply
Atanu
Databricks Employee
  • 0 kudos

For ADF or job runs we always prefer job clusters, but for streaming you may consider using an interactive cluster. Either way, you need to monitor the cluster load; if loads are high there is a chance of job slowness as well as failure. Also, data siz...

3 More Replies
gibbona1
by New Contributor II
  • 3841 Views
  • 2 replies
  • 1 kudos

Resolved! Correct setup and format for calling REST API for image classification

I trained a basic image classification model on MNIST using TensorFlow, logging the experiment run with MLflow. (The post includes the Keras model summary for "my_sequential", truncated here.) ...

Latest Reply
Atanu
Databricks Employee
  • 1 kudos

@Anthony Gibbons​, maybe this GitHub issue could work with your use case: https://github.com/mlflow/mlflow/issues/1661
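
For anyone with the same question, a minimal sketch of scoring a served MNIST model over REST; the endpoint URL and token are placeholders, and the accepted JSON layout ({"inputs": ...} in newer MLflow versions vs. older pandas-style payloads) depends on the MLflow version behind the endpoint:

```python
import numpy as np
import requests

# One blank MNIST-sized image as a stand-in input.
image = np.zeros((1, 28, 28), dtype=np.float32)

resp = requests.post(
    "https://<databricks-instance>/model/my_sequential/1/invocations",  # hypothetical endpoint
    headers={"Authorization": "Bearer dapiXXXXXXXX"},                   # hypothetical PAT
    json={"inputs": image.tolist()},
)
print(resp.json())
```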

1 More Reply
