Community Discussions
Connect with fellow community members to discuss general topics related to the Databricks platform, industry trends, and best practices. Share experiences, ask questions, and foster collaboration within the community.

Forum Posts

egndz
by New Contributor II
  • 637 Views
  • 1 reply
  • 0 kudos

Unable to login to Azure Databricks

I am trying to log in to Azure Databricks as usual, but it waits for 5-10 seconds and then redirects me to https://westeurope-c2.azuredatabricks.net/aad/redirect. The system is up and all of my colleagues can log in to the system. I have tried in incognito m...

Latest Reply
Simranarora
New Contributor III

Hi @egndz, if it was a one-time issue, we suspect it was an issue within the Azure or Databricks services. Please follow the status page to keep track of maintenance and outages: https://status.azuredatabricks.net/

  • 0 kudos
Anto23
by New Contributor
  • 495 Views
  • 1 reply
  • 0 kudos

Failed Synchronization of Files Using Databricks Extension in VS Code

Hi, I am trying to set up the Databricks extension in VS Code. I follow the steps as per the guide below: https://docs.databricks.com/en/dev-tools/vscode-ext/tutorial.html When I move to step 6 (see the above guide) I follow the steps and I create succes...

Latest Reply
Simranarora
New Contributor III

Hi @Anto23, greetings from Databricks. Based on the above information, it seems Files in Workspace is currently disabled for your Databricks environment. This feature allows storing and accessing non-notebook files alongside your notebooks and cod...
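If Files in Workspace does turn out to be disabled, a workspace admin can usually check and flip the setting through the workspace-conf REST API. The sketch below is only illustrative: the host and token are placeholders, and the key name enableWorkspaceFilesystem is an assumption to verify against the current Databricks documentation.

import requests

HOST = "https://<your-workspace>.azuredatabricks.net"  # placeholder workspace URL
TOKEN = "<admin-personal-access-token>"                # placeholder admin token
headers = {"Authorization": f"Bearer {TOKEN}"}

# Read the current value of the setting (key name is an assumption -- verify it)
resp = requests.get(
    f"{HOST}/api/2.0/workspace-conf",
    headers=headers,
    params={"keys": "enableWorkspaceFilesystem"},
)
print(resp.json())

# Enable it; workspace-conf values are passed as strings
requests.patch(
    f"{HOST}/api/2.0/workspace-conf",
    headers=headers,
    json={"enableWorkspaceFilesystem": "true"},
)

After changing the setting, restart the cluster and retry the sync from the extension.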

  • 0 kudos
sai_sathya
by New Contributor III
  • 793 Views
  • 1 reply
  • 0 kudos

Decimal Precision error

When I try to create a DataFrame like this: lstOfRange = list() lstOfRange = [ ['CREDIT_LIMIT_RANGE', Decimal(10000000.010000), Decimal(100000000000000000000000.000000), '>10,000,000', 'G'] ] RangeSchema = StructType([StructField("rangeType", St...

Latest Reply
Kaniz_Fatma
Community Manager

Hi @sai_sathya, The issue you’re encountering with the value in the rangeTo column of your DataFrame is related to the precision of floating-point numbers. Let’s break down what’s happening: Floating-Point Precision: Computers represent floating...
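To make the workaround concrete, here is a minimal sketch: construct the Decimal values from strings and give the DataFrame an explicit DecimalType schema, so the large value never passes through a binary float. The column names and the (38, 6) precision/scale are illustrative, adapted from the excerpt above.

from decimal import Decimal
from pyspark.sql.types import StructType, StructField, StringType, DecimalType

schema = StructType([
    StructField("rangeType", StringType(), True),
    StructField("rangeFrom", DecimalType(38, 6), True),   # up to 38 digits, 6 after the point
    StructField("rangeTo", DecimalType(38, 6), True),
    StructField("rangeLabel", StringType(), True),
    StructField("rangeCode", StringType(), True),
])

lstOfRange = [[
    "CREDIT_LIMIT_RANGE",
    Decimal("10000000.010000"),                  # string constructor keeps the exact value
    Decimal("100000000000000000000000.000000"),  # Decimal(1e23...) from a float literal would round
    ">10,000,000",
    "G",
]]

df = spark.createDataFrame(lstOfRange, schema=schema)
df.show(truncate=False)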

  • 0 kudos
Pravin08
by New Contributor III
  • 728 Views
  • 2 replies
  • 0 kudos

Oracle table load from Databricks

I am trying to load a dataframe from Databricks into a target Oracle table using the write method and the JDBC API. I have the right drivers. The job and its corresponding stages are completing, and the data is getting loaded into the Oracle target tab...

Labels: Community Discussions, Databricks - Oracle load
Latest Reply
Pravin08
New Contributor III

Thanks for the response. Can you please elaborate on the Apache Spark JDBC connector? I am using the ojdbc8 driver as per the Databricks documentation. I am not using Delta Lake. I have the data in a dataframe and am using the write method to insert the data to...
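For reference, the "Apache Spark JDBC connector" here just means Spark's built-in jdbc data source, which works with the ojdbc8 driver already on the cluster. A minimal sketch follows; the host, service name, table, credentials, and tuning values are all placeholders.

jdbc_url = "jdbc:oracle:thin:@//<host>:1521/<service_name>"   # placeholder connection string

df = spark.range(5).withColumnRenamed("id", "ORDER_ID")       # stand-in for the DataFrame you already have

(df.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "TARGET_SCHEMA.TARGET_TABLE")       # placeholder target table
    .option("user", "<user>")
    .option("password", "<password>")
    .option("driver", "oracle.jdbc.driver.OracleDriver")   # class from the ojdbc8 jar
    .option("batchsize", 10000)                            # rows per JDBC batch insert
    .option("numPartitions", 8)                            # parallel connections into Oracle
    .mode("append")
    .save())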

  • 0 kudos
1 More Replies
thethirtyfour
by New Contributor III
  • 1759 Views
  • 2 replies
  • 0 kudos

error installing igraph library

Hi, I am trying to install the "igraph" and "networkD3" CRAN packages for use within a notebook, but am receiving the below error. Could someone please assist? Thanks! * installing *source* package ‘igraph’ ... ** package ‘igraph’ successfully unpacked...

Latest Reply
pcs
New Contributor II

Based on this igraph github issue https://github.com/igraph/rigraph/issues/490#issuecomment-966890059, I followed the instructions to install glpk. After installing glpk, I was able to install igraph.
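One way to apply that fix on a Databricks cluster is a cluster-scoped init script that installs GLPK before the R packages compile. This is only a sketch: it assumes DBFS-backed init scripts are allowed in your workspace and that the Ubuntu package names below (libglpk-dev, libxml2-dev) match your runtime image.

# Write an init script that installs the system libraries igraph compiles against
dbutils.fs.put(
    "dbfs:/init-scripts/install-glpk.sh",
    """#!/bin/bash
set -e
apt-get update
apt-get install -y libglpk-dev libxml2-dev
""",
    True,  # overwrite if the script already exists
)
# Register dbfs:/init-scripts/install-glpk.sh as a cluster-scoped init script
# (cluster settings -> Advanced options -> Init scripts), restart the cluster,
# and then install the igraph / networkD3 CRAN packages.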

  • 0 kudos
1 More Replies
marcuskw
by Contributor
  • 1892 Views
  • 9 replies
  • 2 kudos

Create a Workflow Schedule with varying Parameters

We aim to reduce the number of notebooks we create to a minimum and instead make these fairly flexible. Therefore we have a Factory setup that takes in a parameter to vary the logic. However, when it comes to Workflows, we are forced to create multipl...

Latest Reply
AlexVB
New Contributor III

Did you figure out if this was possible? I too find that we have too many workflows and would rather have them combined, but have different parts of the workflow run on different schedules.
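One pattern that has worked for this (a sketch, not an official recommendation): keep a single parameterised "factory" notebook and create one job per schedule/parameter combination through the Jobs 2.1 API, so the notebook count stays flat even as schedules multiply. The host, token, notebook path, cluster ID, and cron strings below are placeholders.

import requests

HOST = "https://<your-workspace>.azuredatabricks.net"   # placeholder workspace URL
TOKEN = "<personal-access-token>"                        # placeholder token
headers = {"Authorization": f"Bearer {TOKEN}"}

# One job per schedule, all pointing at the same parameterised notebook
schedules = [
    {"name": "factory-daily",  "cron": "0 0 6 * * ?", "params": {"mode": "daily"}},
    {"name": "factory-hourly", "cron": "0 0 * * * ?", "params": {"mode": "hourly"}},
]

for s in schedules:
    payload = {
        "name": s["name"],
        "schedule": {"quartz_cron_expression": s["cron"], "timezone_id": "UTC"},
        "tasks": [{
            "task_key": "run_factory",
            "notebook_task": {
                "notebook_path": "/Shared/factory_notebook",   # placeholder notebook path
                "base_parameters": s["params"],                 # read in the notebook via dbutils.widgets.get
            },
            "existing_cluster_id": "<cluster-id>",              # placeholder cluster
        }],
    }
    r = requests.post(f"{HOST}/api/2.1/jobs/create", headers=headers, json=payload)
    print(s["name"], r.status_code)

Inside the notebook, dbutils.widgets.get("mode") (or whatever parameter name you choose) selects which branch of the factory logic runs.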

  • 2 kudos
8 More Replies
thethirtyfour
by New Contributor III
  • 1049 Views
  • 2 replies
  • 1 kudos

Resolved! Error installing the igraph and networkD3 R libraries

Hi, I am trying to install the igraph and networkD3 CRAN packages for use within a notebook. However, I am receiving the attached installation error when attempting to do so. Could someone please assist? Thank you!

Latest Reply
thethirtyfour
New Contributor III

Thank you!

  • 1 kudos
1 More Replies
markwilliam8506
by New Contributor
  • 433 Views
  • 1 reply
  • 0 kudos

What causes the QB Won't Open issue and how can I fix it?

What can be causing the QB Won't Open issue, and how can I fix it? I need help immediately to fix this annoying issue! Has anybody else had such problems with QB refusing to open? My personal attempts at troubleshooting have yielded no results. I would be...

Latest Reply
kartanjohn29
New Contributor II

@markwilliam8506 If your QB won't open even after multiple tries, you might be facing some common error messages. This scenario can be a result of damaged program files or a faulty installation process, among other possible reasons. The error message...

  • 0 kudos
kfab
by New Contributor II
  • 2091 Views
  • 2 replies
  • 0 kudos

Serving GPU Endpoint, can't find CUDA

Hi everyone! I'm encountering an issue while trying to serve my model on a GPU endpoint. My model is using DeepSpeed, which needs CUDA, and I got the following error: "An error occurred while loading the model. CUDA_HOME does not exist, unable to compile CUDA op(...

Latest Reply
Kaniz_Fatma
Community Manager

Hi @kfab,  It seems you’re encountering an issue related to CUDA while serving your model on a GPU endpoint. Let’s troubleshoot this step by step. CUDA_HOME Not Found: The error message you received, “CUDA_HOME does not exist, unable to compile C...
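A small diagnostic sketch that can help narrow this down inside the serving environment (or a GPU notebook): DeepSpeed's JIT-compiled ops need a full CUDA toolkit with nvcc, not just the GPU driver, and CUDA_HOME is how it finds that toolkit. The /usr/local/cuda path at the end is an assumption to verify on your image.

import os
import shutil

print("CUDA_HOME:", os.environ.get("CUDA_HOME"))   # often unset on serving images
print("nvcc on PATH:", shutil.which("nvcc"))       # None means no full CUDA toolkit installed

try:
    import torch
    print("torch CUDA available:", torch.cuda.is_available())
    print("torch built against CUDA:", torch.version.cuda)
except ImportError:
    print("torch is not installed in this environment")

# If nvcc exists but CUDA_HOME is unset, pointing CUDA_HOME at the toolkit root
# before importing deepspeed is a common workaround:
# os.environ["CUDA_HOME"] = "/usr/local/cuda"   # path is an assumption -- verify on your image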

  • 0 kudos
1 More Replies
sanjay
by Valued Contributor II
  • 983 Views
  • 1 reply
  • 0 kudos

Deploy mlflow model to Sagemaker

Hi, I am trying to deploy an MLflow model to SageMaker. My MLflow model is registered in Databricks. I followed the URL below to deploy, and it needs ECR for deployment. For ECR, either I can create a custom image and push it to ECR, or it is mentioned in the below URL to get...

Latest Reply
Kaniz_Fatma
Community Manager

 Hi @sanjay, Deploying an MLflow model to Amazon SageMaker is a great way to scale your machine learning inference containers. MLflow simplifies the deployment process by providing easy-to-use commands without requiring you to write complex container...
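For a concrete starting point, here is a hedged sketch using the mlflow.deployments client, assuming the serving container image has already been built and pushed to ECR (for example with the mlflow sagemaker build-and-push-container CLI). The region, role ARN, image URL, model URI, and config keys should be checked against the MLflow version you have installed.

from mlflow.deployments import get_deploy_client

client = get_deploy_client("sagemaker:/us-east-1")   # target region goes in the URI

client.create_deployment(
    name="my-model-endpoint",                          # placeholder endpoint name
    model_uri="models:/my_registered_model/1",         # placeholder Model Registry URI
    config={
        # keys below follow the MLflow SageMaker deployment client docs -- verify for your MLflow version
        "execution_role_arn": "arn:aws:iam::<account>:role/<sagemaker-role>",
        "image_url": "<account>.dkr.ecr.us-east-1.amazonaws.com/mlflow-pyfunc:<tag>",
        "instance_type": "ml.m5.xlarge",
        "instance_count": 1,
    },
)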

  • 0 kudos
Sanky
by New Contributor
  • 691 Views
  • 1 reply
  • 0 kudos

SQL query on information_schema.tables via service principal

Hi, I have a simple Python notebook with the below code: query = "select table_catalog, table_schema, table_name from system.information_schema.tables where table_type!='VIEW' and table_catalog='TEST' and table_schema='TEST'"; test = spark.sql(query); disp...

Labels: Community Discussions, information_schema, Service Principal, troubleshooting
Latest Reply
Kaniz_Fatma
Community Manager

Hi @Sanky, it seems you're encountering an issue where your Spark job, running as a service principal, doesn't return any results for the same query that works in your workspace. Let's troubleshoot this: Service Principal Permissions: Y...
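A common cause is simply that information_schema only shows objects the caller is privileged to see. Below is a minimal sketch of the grants the service principal typically needs; the catalog/schema names and the application ID are placeholders, and the statements should be run by a user who owns or administers the catalog.

sp = "`<service-principal-application-id>`"   # placeholder application ID

for stmt in [
    f"GRANT USE CATALOG ON CATALOG TEST TO {sp}",
    f"GRANT USE SCHEMA ON SCHEMA TEST.TEST TO {sp}",
    f"GRANT SELECT ON SCHEMA TEST.TEST TO {sp}",
]:
    spark.sql(stmt)

# Re-run the original query as the service principal once the grants are in place
query = (
    "select table_catalog, table_schema, table_name "
    "from system.information_schema.tables "
    "where table_type != 'VIEW' and table_catalog = 'TEST' and table_schema = 'TEST'"
)
display(spark.sql(query))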

  • 0 kudos
rendorHaevyn
by New Contributor III
  • 874 Views
  • 1 reply
  • 1 kudos

Resolved! Notebook Editor Theme Not Being Retained after Repo Screen

tl;dr: The notebook's selected "Editor theme (New)" is not being retained after viewing the "push code to repo" screen. I believe I have the answer to this issue. What's occurring and why: 1. User selects: View --> Editor theme --> <<theme>> (i.e. Monokai) 2. U...

Latest Reply
Kaniz_Fatma
Community Manager

Hi @rendorHaevyn, Thank you for sharing your observation regarding the Editor theme behaviour in Databricks Notebooks. 

  • 1 kudos
arkiboys
by Contributor
  • 978 Views
  • 2 replies
  • 2 kudos

Resolved! reading databricks tables

Hello, currently I have created Databricks tables in the hive_metastore databases. To read these tables using a select * query inside the Databricks notebook, I have to make sure the Databricks cluster is started. The question is to do with reading the Databr...

Latest Reply
arkiboys
Contributor

thank you

  • 2 kudos
1 More Replies
sue01
by New Contributor II
  • 1078 Views
  • 2 replies
  • 0 kudos

Error with using Vector assembler in Unity Catalog

Hello, I am getting the below error while trying to convert my features using VectorAssembler on a Unity Catalog cluster. I tried setting up the config as mentioned in a different post, but it still did not work. I could use some help here. Thank you.

Labels: Community Discussions, unitycatalog mlflowerror
Latest Reply
sue01
New Contributor II

Yes, so what is the solution for shared access mode? How do I solve this error?
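For context, a minimal VectorAssembler sketch is below. The workaround usually reported for this class of error is running Spark ML on a single-user (assigned) access mode cluster, or on a newer runtime where Spark ML is supported on shared Unity Catalog clusters; treat that as an assumption to verify against the Unity Catalog limitations page for your runtime.

from pyspark.ml.feature import VectorAssembler

df = spark.createDataFrame(
    [(1, 10.0, 0.5), (2, 20.0, 1.5)],
    ["id", "feature_a", "feature_b"],
)

assembler = VectorAssembler(
    inputCols=["feature_a", "feature_b"],   # numeric columns to combine
    outputCol="features",                   # single vector column expected by ML estimators
)
display(assembler.transform(df))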

  • 0 kudos
1 More Replies
lwoodward
by New Contributor II
  • 801 Views
  • 1 reply
  • 0 kudos

Resolved! ETL Advice for Large Transactional Database

I have a SQL Server transactional database on an EC2 instance, and an AWS Glue job that pulls full tables as Parquet files into an S3 bucket. There is a very large table that has 44 million rows, and records are added, updated, and deleted from this t...

Labels: Community Discussions, autoloader, ETL
Latest Reply
ScottSmithDB
Valued Contributor

If you have CDC stream capability, you can use the APPLY CHANGES INTO API to perform SCD1 or SCD2 in a Delta Lake table in Databricks. You can find more information here. This is the best way to go if CDC is a possibility. If you do not have a CD...
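To make the APPLY CHANGES INTO suggestion concrete, here is a minimal Delta Live Tables sketch (it only runs inside a DLT pipeline). It assumes the CDC feed lands as Parquet files in S3 and carries an operation flag plus a sequencing column; the paths, column names, and SCD type are placeholders to adapt.

import dlt
from pyspark.sql.functions import col, expr

@dlt.table(name="orders_cdc_raw", comment="Raw CDC feed ingested with Auto Loader")
def orders_cdc_raw():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "parquet")
        .load("s3://<bucket>/cdc/orders/")   # placeholder path to the CDC output
    )

dlt.create_streaming_table("orders")         # the SCD target table

dlt.apply_changes(
    target="orders",
    source="orders_cdc_raw",
    keys=["order_id"],                       # placeholder primary key
    sequence_by=col("commit_timestamp"),     # placeholder ordering column from the CDC feed
    apply_as_deletes=expr("op = 'DELETE'"),  # placeholder delete flag
    stored_as_scd_type=1,                    # 1 = keep latest row, 2 = keep full history
)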

  • 0 kudos