Data Engineering

Forum Posts

Merchiv
by New Contributor III
  • 7577 Views
  • 4 replies
  • 3 kudos

Resolved! How can I add a duration in milliseconds to a timestamp?

Let's say I have a DataFrame with a timestamp column and an offset column in milliseconds, in timestamp and long format respectively. E.g.:
from datetime import datetime
df = spark.createDataFrame( [ (datetime(2021, 1, 1), 1500, ), (dat...

Latest Reply
Merchiv
New Contributor III
  • 3 kudos

Although @Lakshay Goel's solution works, we've been using an alternative approach that we found to be a bit more readable:
from pyspark.sql import Column, functions as f
def make_dt_interval_sec(col: Column): return f.expr(f"make_dt_interval...
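For reference, a minimal runnable sketch of the make_dt_interval approach described above; the column names ts and offset_ms and the sample values are assumptions for illustration, and spark is the notebook's SparkSession:

from datetime import datetime
from pyspark.sql import functions as f

# Hypothetical input: a timestamp column plus a millisecond offset column.
df = spark.createDataFrame([(datetime(2021, 1, 1), 1500)], ["ts", "offset_ms"])

# make_dt_interval(days, hours, mins, secs) builds a day-time interval;
# dividing the millisecond offset by 1000 turns it into fractional seconds.
shifted = df.withColumn(
    "ts_shifted",
    f.col("ts") + f.expr("make_dt_interval(0, 0, 0, offset_ms / 1000)"),
)
shifted.show(truncate=False)  # ts_shifted: 2021-01-01 00:00:01.5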

3 More Replies
User16869510359
by Esteemed Contributor
  • 7908 Views
  • 2 replies
  • 0 kudos
Latest Reply
Mooune_DBU
Valued Contributor
  • 0 kudos

It's set as an environment variable called `DATABRICKS_RUNTIME_VERSION`. In your init scripts, you just need to add a line to display or save the info (see the Python example below):
import os
print("DATABRICKS_RUNTIME_VERSION:", os.environ.get('DATABRICKS_R...
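A minimal sketch of that reply, runnable as-is in a notebook cell or appended to an init script:

import os

# The runtime exposes its version through this environment variable.
print("DATABRICKS_RUNTIME_VERSION:", os.environ.get("DATABRICKS_RUNTIME_VERSION"))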

1 More Replies
bchaubey
by Contributor II
  • 930 Views
  • 2 replies
  • 0 kudos

voucher

Did you receive your voucher?

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Kashish Khetarpaul, thank you for reaching out! Please submit a ticket to our Training Team here: https://help.databricks.com/s/contact-us?ReqType=training and our team will get back to you shortly.

1 More Replies
Databrick_begin
by New Contributor
  • 1228 Views
  • 1 reply
  • 0 kudos

Databricks notebook to Azure SQL Server connection using a private IP, because public access is denied on the Azure SQL database; Databricks and Azure SQL are in the same subscription but in different virtual networks.

We have created a private endpoint for the Azure SQL database, which has a private IP. By making a hosts-file entry on my system, I am able to resolve the IP for the Azure SQL server and connect to it, but I am unable to connect from the Azure Databricks not...

Latest Reply
Ryoma
New Contributor II
  • 0 kudos

If VNet injection is not used, the connection could be established by setting up an init script with the Azure private resolver as the nameserver:
#!/bin/bash
mv /etc/resolv.conf /etc/resolv.conf.orig
echo nameserver <your dns server ip> | sudo tee --append /e...
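Once name resolution works, the connection itself from a notebook is a standard JDBC read. A hedged sketch; the server, database, table, and credentials are all placeholders:

# Placeholder connection details for the SQL server behind the private endpoint.
jdbc_url = (
    "jdbc:sqlserver://<your-server>.database.windows.net:1433;"
    "database=<your-database>;encrypt=true;"
)

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.<your-table>")
    .option("user", "<username>")
    .option("password", "<password>")
    .load()
)
df.show()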

THIAM_HUATTAN
by Valued Contributor
  • 7533 Views
  • 6 replies
  • 7 kudos

Is catalog a feature in the community version?

%sql create catalog if not exists catalog1
I tried the above, but it gives me the error below: com.databricks.backend.common.rpc.DatabricksExceptions$SQLExecutionException: org.apache.spark.sql.AnalysisException: Catalog namespace is not supported. at com.d...

Latest Reply
Kaniz
Community Manager
  • 7 kudos

Hi @THIAM HUAT TAN, it would mean a lot if you could select the "Best Answer" to help others find the correct answer faster. This makes that answer appear right after the question, so it's easier to find within a thread. It also helps us mark the que...

5 More Replies
Dataengineer_mm
by New Contributor
  • 765 Views
  • 2 replies
  • 0 kudos

Databricks workflow migration to higher environments

How do we migrate Databricks workflows to higher environments? I see an option for calling the tasks (notebooks, Python) from the GitHub repositories, but how do we migrate the entire workflow jobs to another environment?

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

Hi @Menaka Murugesan, just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark that response as best. Otherwise, please let us know if you still need help.
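Since the thread excerpt doesn't show a concrete resolution, here is a hedged sketch of one common approach: export the job definition with the Jobs 2.1 API and re-create it in the target workspace. The hosts, token, and job ID are placeholders:

import requests

src_host = "https://<source-workspace>"
tgt_host = "https://<target-workspace>"
headers = {"Authorization": "Bearer <personal-access-token>"}

# Fetch the job definition from the source workspace.
job = requests.get(
    f"{src_host}/api/2.1/jobs/get", headers=headers, params={"job_id": "<job-id>"}
).json()

# Re-create the job in the target workspace from its settings block.
resp = requests.post(
    f"{tgt_host}/api/2.1/jobs/create", headers=headers, json=job["settings"]
)
print(resp.json())  # returns the new job_id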

1 More Replies
rbricks
by New Contributor
  • 684 Views
  • 2 replies
  • 0 kudos

numSourceRows greater than expected

Hey, I am doing an upsert of a source DataFrame into a target table. Before said upsert, I print out the source DataFrame's row count, which is a bit smaller than what `numSourceRows` says after the operation completes and I check the operationMetrics....
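For context, a minimal sketch of how the two numbers can be compared; the source and target table names are hypothetical:

# Hypothetical names: staging_updates is the upsert source, target is the Delta table.
source_df = spark.read.table("staging_updates")
print("source rows before merge:", source_df.count())

# ... the MERGE/upsert runs here ...

# Compare against the metric Delta recorded for the most recent operation.
last_op = spark.sql("DESCRIBE HISTORY target LIMIT 1").collect()[0]
print("numSourceRows:", last_op["operationMetrics"].get("numSourceRows"))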

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

Could you share your code snippet, please? Also share the expected output.

1 More Replies
771407
by New Contributor II
  • 878 Views
  • 3 replies
  • 3 kudos

Resolved! R code that works perfectly in RStudio does not run here

Hi,I have a "simple" R script that I need to import into Databricks and am running into errors.For example:TipoB <- Techtb %>% dplyr::filter(grepl('being evaluated', Comentarios))#TipoB$yearsSpec <- NATipoB$yearsSpec <- str_replace(TipoB$Comentarios,...

Latest Reply
771407
New Contributor II
  • 3 kudos

RStudio version 2022.12.0; R, the latest version available on 08/FEB/2023. I don't know where to find the DBR version and configuration. Can you direct me?

2 More Replies
MikeJohnsonZa
by New Contributor
  • 1187 Views
  • 3 replies
  • 0 kudos

Resolved! Importing irregularly formatted json files

Hi, I'm importing a large collection of JSON files. The problem is that they are not what I would expect a well-formatted JSON file to be (although probably still valid): each file consists of only a single record that looks something like this (this i...

Latest Reply
jose_gonzalez
Moderator
  • 0 kudos

Hi @Michael Johnson, I would like to share the following notebook, which contains examples of how to process complex data types like JSON. Please check the following link and let us know if you still need help: https://docs.databricks.com/optimization...
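For files that each hold one pretty-printed record, the reader's multiLine option is usually the first thing to try; a minimal sketch with a placeholder path:

# Each input file holds a single multi-line JSON record, so the default
# one-record-per-line reader would mis-parse it without multiLine.
df = spark.read.option("multiLine", True).json("/mnt/<your-path>/json_files/")
df.printSchema()
df.show(truncate=False)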

2 More Replies
youssefmrini
by Honored Contributor III
  • 976 Views
  • 1 reply
  • 4 kudos
Latest Reply
youssefmrini
Honored Contributor III
  • 4 kudos

You can now use cluster policies to restrict the number of clusters a user can create. For more information, see https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-limit
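A hedged sketch of what that could look like via the Cluster Policies API; the host, token, and limit are placeholders, and the field names should be verified against the linked docs:

import json
import requests

host = "https://<workspace>"
headers = {"Authorization": "Bearer <personal-access-token>"}

payload = {
    "name": "two-clusters-per-user",
    # Minimal placeholder definition; real policies usually constrain more attributes.
    "definition": json.dumps({"spark_version": {"type": "unlimited"}}),
    "max_clusters_per_user": 2,
}
resp = requests.post(
    f"{host}/api/2.0/policies/clusters/create", headers=headers, json=payload
)
print(resp.json())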

youssefmrini
by Honored Contributor III
  • 1115 Views
  • 1 reply
  • 2 kudos
Latest Reply
youssefmrini
Honored Contributor III
  • 2 kudos

Clone can now be used to create and incrementally update Delta tables that mirror Apache Parquet and Apache Iceberg tables. You can update your source Parquet table and incrementally apply the changes to the cloned Delta table with the clone comman...
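A minimal sketch of the syntax from a notebook; the table name and source path are placeholders, and re-running the same statement applies the incremental refresh:

# Create (or incrementally refresh) a Delta clone of a Parquet source.
spark.sql("""
  CREATE OR REPLACE TABLE my_delta_mirror
  CLONE parquet.`/mnt/<your-path>/parquet_table`
""")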

youssefmrini
by Honored Contributor III
  • 480 Views
  • 1 reply
  • 2 kudos
Latest Reply
youssefmrini
Honored Contributor III
  • 2 kudos

You can now use OAuth to authenticate to Power BI and Tableau. For more information, see Configure OAuth (Public Preview) for Power BI and Configure OAuth (Public Preview) for Tableau: https://docs.databricks.com/integrations/configure-oauth-powerbi.h...

156190
by New Contributor III
  • 1799 Views
  • 6 replies
  • 3 kudos

Resolved! Is 'run_as' user available from jobs api 2.1?

I know that the run_as user generally defaults to the creator_user, but I would like to find the defined run_as user for each of our jobs. Unfortunately, I'm unable to locate that field in the api.
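A hedged sketch of how one might check, assuming a Jobs 2.1 deployment recent enough to return run_as_user_name from jobs/get (the host and token are placeholders); older deployments may not expose the field at all:

import requests

host = "https://<workspace>"
headers = {"Authorization": "Bearer <personal-access-token>"}

jobs = requests.get(f"{host}/api/2.1/jobs/list", headers=headers).json().get("jobs", [])
for j in jobs:
    detail = requests.get(
        f"{host}/api/2.1/jobs/get", headers=headers, params={"job_id": j["job_id"]}
    ).json()
    # Fall back to the creator when run_as_user_name is absent.
    print(j["job_id"], detail.get("run_as_user_name", detail.get("creator_user_name")))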

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Keller, Michael, hope all is well! Just wanted to check in: were you able to resolve your issue? If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help. We'd love to hear from you. Th...

5 More Replies
SagarK1
by New Contributor
  • 2114 Views
  • 5 replies
  • 2 kudos

Managing permissions using MLflow APIs

Hello all, I am trying to manage the permissions on experiments using the MLflow API. Is there any MLflow API that helps manage the Can Read, Can Edit, and Can Manage permissions? Example: I create the model using MLflow APIs, and through my c...

Latest Reply
jsan
New Contributor II
  • 2 kudos

Hey folks, did we get any workaround for this, or is what @Sean Owen said true?
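One workaround often suggested for this is the workspace Permissions API rather than MLflow itself; a hedged sketch, with the experiment ID, user, and credentials as placeholders:

import requests

host = "https://<workspace>"
headers = {"Authorization": "Bearer <personal-access-token>"}

# Grant CAN_EDIT on an experiment; levels include CAN_READ, CAN_EDIT, CAN_MANAGE.
payload = {
    "access_control_list": [
        {"user_name": "<someone@example.com>", "permission_level": "CAN_EDIT"}
    ]
}
resp = requests.patch(
    f"{host}/api/2.0/permissions/experiments/<experiment-id>",
    headers=headers,
    json=payload,
)
print(resp.status_code)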

4 More Replies
zeta_load
by New Contributor II
  • 929 Views
  • 1 reply
  • 1 kudos

Resolved! Is it possible to restart a cluster from a Notebook without using the UI

I have some code that occasionally executes incorrectly, meaning that every n-th time a calculation in a table is wrong. If that happens, I want to be able to restart the cluster from the notebook. I'm therefore looking for a piece of code that can accomp...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 1 kudos

@Lukas Goldschmied It is. You'll need to use the Databricks API. Here you can find an example: https://learn.microsoft.com/en-us/azure/databricks/_extras/notebooks/source/clusters-long-running-optional-restart.html
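A condensed sketch of the same idea as the linked notebook; the host and token are placeholders, and the cluster ID is read from the notebook's own Spark conf. Note that restarting the cluster a notebook runs on will also terminate that notebook's execution:

import requests

host = "https://<workspace>"
headers = {"Authorization": "Bearer <personal-access-token>"}

# The notebook can discover the cluster it is currently attached to.
cluster_id = spark.conf.get("spark.databricks.clusterUsageTags.clusterId")

resp = requests.post(
    f"{host}/api/2.0/clusters/restart", headers=headers, json={"cluster_id": cluster_id}
)
print(resp.status_code)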
