Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Forum Posts

u2dragon
by New Contributor III
  • 16774 Views
  • 5 replies
  • 0 kudos

Resolved! Can't install python library

I'm trying to install a Python library but am not able to; the status won't change from "pending". I get this message when I click on the library under the cluster's Libraries tab: "Library installation has been attempted on the driver node but has not...

Latest Reply
u2dragon
New Contributor III
  • 0 kudos

OK, it looks like I was able to solve my problem. First, I needed to install all the required libraries one by one: pandas, six, requests, pyspnego, cryptography, krb5, requests-kerberos. After that I was able to install the webAPI library.
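For reference, a minimal sketch of the same fix done as a notebook-scoped install rather than through the cluster Libraries tab; the package list is taken from the reply above, and "webAPI" is assumed to be the actual PyPI name:

```python
# Run in a Databricks notebook cell: install the dependencies first,
# then the library that previously hung at "pending".
%pip install pandas six requests pyspnego cryptography krb5 requests-kerberos webAPI
```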

4 More Replies
Merchiv
by New Contributor III
  • 17356 Views
  • 4 replies
  • 3 kudos

Resolved! How can I add a duration in milliseconds to a timestamp?

Let's say I have a DataFrame with a timestamp column and an offset column in milliseconds, in the timestamp and long formats respectively. E.g.
from datetime import datetime
df = spark.createDataFrame( [ (datetime(2021, 1, 1), 1500, ), (dat...

Latest Reply
Merchiv
New Contributor III
  • 3 kudos

Although @Lakshay Goel's solution works, we've been using an alternative approach that we found to be a bit more readable:
from pyspark.sql import Column, functions as f

def make_dt_interval_sec(col: Column):
    return f.expr(f"make_dt_interval...
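The snippet above is cut off; a minimal self-contained sketch of the same make_dt_interval idea, assuming a Databricks notebook (spark predefined), Spark 3.2+, and hypothetical column names ts and offset_ms:

```python
from datetime import datetime
from pyspark.sql import functions as F

df = spark.createDataFrame(
    [(datetime(2021, 1, 1), 1500)],
    ["ts", "offset_ms"],
)

# make_dt_interval(days, hours, mins, secs) builds a day-time interval;
# dividing the millisecond offset by 1000 gives fractional seconds.
shifted = df.withColumn(
    "ts_shifted",
    F.col("ts") + F.expr("make_dt_interval(0, 0, 0, offset_ms / 1000)"),
)
shifted.show(truncate=False)
```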

3 More Replies
bchaubey
by Contributor II
  • 2549 Views
  • 2 replies
  • 0 kudos

voucher

Did you receive your voucher?

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Kashish Khetarpaul, thank you for reaching out! Please submit a ticket to our Training Team here: https://help.databricks.com/s/contact-us?ReqType=training and our team will get back to you shortly.

1 More Replies
Databrick_begin
by New Contributor
  • 2532 Views
  • 1 reply
  • 0 kudos

Databricks notebook to Azure SQL Server connection using private IP, because public access is denied in the Azure SQL database and Databricks and Azure SQL are in the same subscription but different virtual networks

We have created a private endpoint for the Azure SQL database, which has a private IP, and by adding a hosts-file entry on my system I am able to resolve the IP for the Azure SQL server and connect to it. But I am unable to connect from Azure Databricks not...

Latest Reply
Ryoma
New Contributor II
  • 0 kudos

If VNet injection is not used, the connection could be established by setting up an init script that adds the Azure private resolver as a nameserver:
#!/bin/bash
mv /etc/resolv.conf /etc/resolv.conf.orig
echo nameserver <your dns server ip> | sudo tee --append /etc/resolv.conf

THIAM_HUATTAN
by Valued Contributor
  • 12127 Views
  • 5 replies
  • 6 kudos

Is catalog a feature in the community version?

%sql create catalog if not exists catalog1
I tried the above, but it gives me the error below:
com.databricks.backend.common.rpc.DatabricksExceptions$SQLExecutionException: org.apache.spark.sql.AnalysisException: Catalog namespace is not supported. at com.d...

Latest Reply
BradSheridan
Valued Contributor
  • 6 kudos

For me, selecting Runtime 11.1 did not work (i.e. 'unity catalog' didn't show up on the right-hand side under Summary). But when I selected Runtime 11.2, it popped up. Going to start playing with it now

4 More Replies
Dataengineer_mm
by New Contributor
  • 2188 Views
  • 2 replies
  • 0 kudos

Databricks workflow migration to higher environments

How do we migrate Databricks workflows to a higher environment? I do see an option for calling the tasks (notebooks, Python) from GitHub repositories, but how do we migrate the entire workflow jobs to another environment?
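The thread doesn't settle on an answer; one common pattern is to export a job's settings with the Jobs API 2.1 and recreate it in the target workspace. A rough sketch, where the workspace URLs, tokens, and job ID are all placeholders:

```python
import requests

SRC_HOST = "https://<source-workspace>.cloud.databricks.com"
DST_HOST = "https://<target-workspace>.cloud.databricks.com"

def copy_job(job_id: int, src_token: str, dst_token: str) -> int:
    # Fetch the full job spec (tasks, clusters, schedule) from the source.
    src = requests.get(
        f"{SRC_HOST}/api/2.1/jobs/get",
        headers={"Authorization": f"Bearer {src_token}"},
        params={"job_id": job_id},
    )
    src.raise_for_status()
    settings = src.json()["settings"]
    # Recreate the job in the target workspace with the same settings.
    dst = requests.post(
        f"{DST_HOST}/api/2.1/jobs/create",
        headers={"Authorization": f"Bearer {dst_token}"},
        json=settings,
    )
    dst.raise_for_status()
    return dst.json()["job_id"]
```

Environment-specific references inside the settings (cluster policies, instance pools, repo paths) would still need to be remapped by hand.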

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Hi @Menaka Murugesan, just a friendly follow-up. Did any of the responses help you resolve your question? If so, please mark it as best. Otherwise, please let us know if you still need help.

1 More Replies
rbricks
by New Contributor
  • 1484 Views
  • 2 replies
  • 0 kudos

numSourceRows greater than expected

Hey, I am doing an upsert of a source DataFrame into a target table. Before said upsert, I print out the source DataFrame's row count, which is a bit smaller than what `numSourceRows` says after the operation completes and I check the operationMetrics....
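For context, a hedged sketch of the comparison being described, assuming a Delta target table named target_table, a source_df, and an id join key (all hypothetical):

```python
from delta.tables import DeltaTable

print("rows before merge:", source_df.count())

target = DeltaTable.forName(spark, "target_table")
(target.alias("t")
    .merge(source_df.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# operationMetrics for the MERGE is in the latest history entry.
metrics = target.history(1).select("operationMetrics").first()[0]
print("numSourceRows:", metrics["numSourceRows"])
```

If the source DataFrame is recomputed lazily between the count and the merge (e.g. read from a path that is still receiving files), the two numbers can legitimately differ.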

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Could you share your code snippet, please? Also share the expected output.

1 More Replies
771407
by New Contributor II
  • 2570 Views
  • 3 replies
  • 3 kudos

Resolved! R code that works perfectly on Rstudio does not run here

Hi, I have a "simple" R script that I need to import into Databricks and am running into errors. For example:
TipoB <- Techtb %>% dplyr::filter(grepl('being evaluated', Comentarios))
#TipoB$yearsSpec <- NA
TipoB$yearsSpec <- str_replace(TipoB$Comentarios,...

Latest Reply
771407
New Contributor II
  • 3 kudos

RStudio version 2022.12.0; R latest version available on 08/FEB/2023. I don't know where to find the DBR version and configuration. Can you direct me?

2 More Replies
MikeJohnsonZa
by New Contributor
  • 2987 Views
  • 3 replies
  • 0 kudos

Resolved! Importing irregularly formatted json files

Hi, I'm importing a large collection of JSON files. The problem is that they are not what I would expect a well-formatted JSON file to be (although probably still valid): each file consists of only a single record that looks something like this (this i...

Latest Reply
jose_gonzalez
Databricks Employee
  • 0 kudos

Hi @Michael Johnson, I would like to share the following notebook, which contains examples of how to process complex data types like JSON. Please check the following link and let us know if you still need help: https://docs.databricks.com/optimization...
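For single-record, multi-line JSON files like the ones described, the usual starting point is the reader's multiLine option; a minimal sketch with a placeholder path:

```python
# Each file holds one JSON record spread over several lines, so enable
# multiLine; the input path is a placeholder.
df = spark.read.option("multiLine", True).json("/mnt/raw/json/")
df.printSchema()  # nested fields can then be unpacked with explode()/getField()
```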

2 More Replies
youssefmrini
by Databricks Employee
  • 2512 Views
  • 1 reply
  • 4 kudos
Latest Reply
youssefmrini
Databricks Employee
  • 4 kudos

You can now use cluster policies to restrict the number of clusters a user can create. For more information, see https://docs.databricks.com/administration-guide/clusters/policies.html#cluster-limit
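A hedged sketch of setting that limit through the Cluster Policies API; the workspace URL, token, and policy definition are placeholders:

```python
import requests

# max_clusters_per_user caps how many clusters each user may create
# from this policy.
resp = requests.post(
    "https://<workspace>.cloud.databricks.com/api/2.0/policies/clusters/create",
    headers={"Authorization": "Bearer <token>"},
    json={
        "name": "limit-two-clusters-per-user",
        "definition": "{\"spark_version\": {\"type\": \"unlimited\"}}",
        "max_clusters_per_user": 2,
    },
)
resp.raise_for_status()
print(resp.json()["policy_id"])
```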

youssefmrini
by Databricks Employee
  • 2486 Views
  • 1 reply
  • 2 kudos
Latest Reply
youssefmrini
Databricks Employee
  • 2 kudos

Clone can now be used to create and incrementally update Delta tables that mirror Apache Parquet and Apache Iceberg tables. You can update your source Parquet table and incrementally apply the changes to the cloned Delta table with the clone comman...
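A minimal sketch of that flow, assuming a Databricks notebook and an illustrative Parquet path and table name:

```python
# First run creates a Delta clone of the Parquet directory.
spark.sql("""
    CREATE TABLE IF NOT EXISTS my_delta_mirror
    CLONE parquet.`/mnt/landing/events_parquet`
""")

# After the source Parquet data changes, re-running CLONE applies
# the changes to the existing Delta table incrementally.
spark.sql("""
    CREATE OR REPLACE TABLE my_delta_mirror
    CLONE parquet.`/mnt/landing/events_parquet`
""")
```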

youssefmrini
by Databricks Employee
  • 1366 Views
  • 1 reply
  • 2 kudos
Latest Reply
youssefmrini
Databricks Employee
  • 2 kudos

You can now use OAuth to authenticate to Power BI and Tableau. For more information, see Configure OAuth (Public Preview) for Power BI and Configure OAuth (Public Preview) for Tableau: https://docs.databricks.com/integrations/configure-oauth-powerbi.h...

  • 2 kudos
156190
by New Contributor III
  • 4381 Views
  • 4 replies
  • 3 kudos

Resolved! Is 'run_as' user available from jobs api 2.1?

I know that the run_as user generally defaults to the creator_user, but I would like to find the defined run_as user for each of our jobs. Unfortunately, I'm unable to locate that field in the API.
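For what it's worth, Jobs API 2.1 responses now expose a run_as_user_name field; a hedged sketch of reading it, with the workspace URL, token, and job ID as placeholders:

```python
import requests

resp = requests.get(
    "https://<workspace>.cloud.databricks.com/api/2.1/jobs/get",
    headers={"Authorization": "Bearer <token>"},
    params={"job_id": 123},
)
resp.raise_for_status()
# run_as_user_name sits at the top level of the jobs/get response.
print(resp.json().get("run_as_user_name", "<field not present>"))
```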

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Hi @Keller, Michael, hope all is well! Just wanted to check in: were you able to resolve your issue, and would you be happy to share the solution or mark an answer as best? Else please let us know if you need more help. We'd love to hear from you. Th...

3 More Replies
SagarK1
by New Contributor
  • 5151 Views
  • 4 replies
  • 2 kudos

Managing the permissions using MLFlow APIs

Hello all, I am trying to manage the permissions on experiments using the MLflow API. Do we have any MLflow API which helps manage the Can Read, Can Edit, and Can Manage permissions? Example: I create the model using MLflow APIs and through my c...
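One avenue (outside MLflow itself) is the Databricks Permissions API, which covers experiment objects; a hedged sketch, with the workspace URL, token, experiment ID, and user all illustrative:

```python
import requests

# Grant CAN_READ on an MLflow experiment via the workspace Permissions API;
# valid levels for experiments include CAN_READ, CAN_EDIT, and CAN_MANAGE.
resp = requests.patch(
    "https://<workspace>.cloud.databricks.com/api/2.0/permissions/experiments/<experiment_id>",
    headers={"Authorization": "Bearer <token>"},
    json={
        "access_control_list": [
            {"user_name": "colleague@example.com", "permission_level": "CAN_READ"},
        ]
    },
)
resp.raise_for_status()
print(resp.json())
```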

Latest Reply
jsan
New Contributor II
  • 2 kudos

Hey folks, did we get any workaround for this, or is what @Sean Owen said true?

3 More Replies
zeta_load
by New Contributor II
  • 24293 Views
  • 1 reply
  • 1 kudos

Resolved! Is it possible to restart a cluster from a Notebook without using the UI

I have some code that occasionally executes incorrectly, meaning that every n-th time a calculation in a table is wrong. If that happens, I want to be able to restart the cluster from the notebook. I'm therefore looking for a piece of code that can accomp...

Latest Reply
daniel_sahal
Esteemed Contributor
  • 1 kudos

@Lukas Goldschmied It is. You'll need to use the Databricks API. Here you can find an example: https://learn.microsoft.com/en-us/azure/databricks/_extras/notebooks/source/clusters-long-running-optional-restart.html
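The gist of that linked notebook, sketched against the Clusters API 2.0; this assumes it runs inside a Databricks notebook, where dbutils and spark are available:

```python
import requests

# Resolve the workspace URL, API token, and current cluster ID
# from the notebook context.
ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
host = ctx.apiUrl().get()
token = ctx.apiToken().get()
cluster_id = spark.conf.get("spark.databricks.clusterUsageTags.clusterId")

# Restart the cluster this notebook is attached to. Note the restart
# also interrupts this very notebook run.
resp = requests.post(
    f"{host}/api/2.0/clusters/restart",
    headers={"Authorization": f"Bearer {token}"},
    json={"cluster_id": cluster_id},
)
resp.raise_for_status()
```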

