- 1033 Views
- 2 replies
- 0 kudos
Restrict a user/entity's access to only specific Databricks REST APIs
Hi community, assume I generate a personal access token for an entity. Post generation, can I restrict the access of the entity to specific REST APIs? In other words, consider this example where, once I generate the token and set up a bearer token b...
@Surajv You have to rely on access control settings on resources and entities (users or service principals), or create some cluster policies, rather than directly restricting the API endpoints at the token level. Note: API access based on fine-grained ...
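As a hedged illustration of that resource-level approach: instead of limiting the token itself, you narrow what the principal can touch, for example with the Permissions API. This is a sketch only; the host, cluster ID, and service principal name are placeholders, and `CAN_ATTACH_TO` is one of the documented cluster permission levels.

```python
import json
import urllib.request


def build_acl_request(host, cluster_id, principal, permission="CAN_ATTACH_TO"):
    """Build a PATCH request granting one permission level on one cluster
    via the Permissions API (sketch; values below are placeholders)."""
    payload = {
        "access_control_list": [
            {"service_principal_name": principal,
             "permission_level": permission}
        ]
    }
    return urllib.request.Request(
        f"{host}/api/2.0/permissions/clusters/{cluster_id}",
        data=json.dumps(payload).encode(),
        method="PATCH",
        headers={"Content-Type": "application/json"},
    )


# Placeholder host/cluster/principal; add an Authorization: Bearer header
# with an admin token before actually sending the request.
req = build_acl_request("https://example.cloud.databricks.com",
                        "0101-abcdef", "sp-app-1")
```

The token then only matters for authentication; what the entity can do is bounded by the ACLs on each resource.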
- 2865 Views
- 17 replies
- 3 kudos
Mounting Data IOException
Hello, I am currently taking a Coursera course on data science using SQL. For one of our assignments we need to mount some data by running a script provided by the class. When I run the script I receive the following error. I...
Thanks for the responses, EricMa and Tawfeeq. I was able to load both tables in and complete the Module 1 exercises.
- 88 Views
- 0 replies
- 0 kudos
Sharing Opportunities with Databricks
Hi everyone, I would like to talk to someone who could set up a process of deal sharing with Databricks in compliance with the GDPR. Thanks, Carol.
- 189 Views
- 3 replies
- 2 kudos
Resolved! Differences among python libraries
I am confused about the differences between various Python libraries for Databricks, especially with regard to differences among [databricks-connect](https://pypi.org/project/databricks-connect/), [databricks-api](https://pypi.org/project/databricks-...
@szymon_dybczak, thank you for typing all that up. It is very clear and helpful. Two follow-ups if I may:
1. If one's primary goal is to execute SQL queries, why prefer the Databricks SQL connector over a generic JDBC or ODBC package?
2. Did I miss any other ...
- 4645 Views
- 3 replies
- 0 kudos
How to list all users from a specific group using SQL?
I want to list, using the SQL editor, all user names from a specific group. Reading the documentation, I only learned how to show the groups or the users using simple filters, like:
SHOW GROUPS LIKE '*XPTO*';
SHOW GROUPS WITH USER `test@gmail.com`
SHOW USERS L...
I don't think it's possible yet. Unfortunately, I looked in all the system tables and commands and didn't find anything like this. But with a Python notebook, like what the AI did, you can reconstruct it: first you list all the users with SHOW USERS, then you ...
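The reconstruction described above can be sketched as follows. `run_sql` is a hypothetical adapter — on Databricks it would be roughly `lambda q: spark.sql(q).collect()` — and the exact column layout of `SHOW GROUPS WITH USER` may differ, so treat this as a sketch, not a verified implementation:

```python
def users_in_group(target_group, run_sql):
    """Return the users whose group list contains target_group.

    run_sql(query) -> list of row tuples, first column = name.
    Iterates SHOW USERS, then SHOW GROUPS WITH USER per user.
    """
    users = [row[0] for row in run_sql("SHOW USERS")]
    members = []
    for user in users:
        groups = {row[0] for row in run_sql(f"SHOW GROUPS WITH USER `{user}`")}
        if target_group in groups:
            members.append(user)
    return members
```

Note this issues one query per user, so it can be slow on workspaces with many users.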
- 178 Views
- 2 replies
- 0 kudos
Resolved! Databricks Certified Associate Developer for Apache Spark 3 exam got suspended. Immediate support required
Hello @Cert-Team @Certificate Team, Request Id# 00544837. I had a very frustrating experience while attempting my Databricks Certified Associate Developer for Apache Spark 3 certification exam. I had answered more than 44 questions. I applied for Data...
The issue is resolved. Please close the ticket.
- 715 Views
- 6 replies
- 0 kudos
Resolved! Unable to start browser for databricks certification
Hello, I have registered for the Databricks Certified Data Engineering Associate exam. One of the requirements to take the exam is the secure browser. The exam is set for Sunday, 6th October 2024, but the browser installation (PSI Secure Bridge browser) does not work. Reac...
Hey folks, the exam setup worked successfully. For future reference, on successful installation two browsers will be installed:
1. PSI Secure Browser
2. Lockdown Browser
As long as the Lockdown Browser is available and, when prompted, says it will open only w...
- 995 Views
- 2 replies
- 1 kudos
Private Python Package in Serverless Job
I am trying to create a Databricks Job using serverless compute. I am using a wheel file to run the Python job. The wheel has a setup.py file through which all dependencies are installed. One of the package dependencies is a private package hosted on Git...
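The resolution isn't shown in this thread, but one common pattern (an assumption, not necessarily what the poster used) is declaring the private package as a PEP 508 direct reference in setup.py, leaving authentication to git credential configuration on the compute rather than hard-coding a token. The package and org names below are hypothetical:

```python
# Sketch: build a PEP 508 direct-reference requirement string for a
# private GitHub-hosted package (hypothetical package/org names).
def private_requirement(pkg, org, ref="main"):
    """Return a 'name @ git+URL@ref' requirement usable in install_requires."""
    return f"{pkg} @ git+https://github.com/{org}/{pkg}.git@{ref}"


# In setup.py (sketch):
#   install_requires=[private_requirement("my_private_pkg", "my-org")]
```

Whether the serverless environment can reach GitHub, and how credentials are injected there, would still need to be confirmed against the serverless compute documentation.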
- 174 Views
- 1 reply
- 0 kudos
Unable to clone delta live table
Hello Team, in order to avoid the cost of running the workflow for historical data, I am trying DEEP CLONE to copy data of a streaming table from the PROD workspace to the QA workspace, but it gives the error below. Please suggest some efficient data copy m...
Hi @Anish_2, this is a well-known limitation of DLT. You can read about it in the limitations section of the documentation. The conclusion is the following: if you want the ability to use all features of the Delta protocol, it's better not to use the DLT framework. It's gre...
- 2930 Views
- 2 replies
- 0 kudos
File size upload limit through CLI
Does anyone know the size limit for uploading files through the CLI? I'm not finding it in the documentation.
- 479 Views
- 2 replies
- 0 kudos
CLI export fails with "Error: expected to have the absolute path of the object or directory"
I am trying to export a job as a DAB in order to create an Asset Bundle, following this: https://community.databricks.com/t5/data-engineering/databricks-asset-bundle-dab-from-existing-workspace/td-p/49309. I am on Windows 10 Pro x64 with Databricks CLI v0.223...
To export an existing folder under /Workspace/..., the export-dir command could be used:
databricks workspace export-dir /Workspace/Applications/ucx/logs/migrate-tables/run-123-0/ /Users/artem.sheiko/logs
- 170 Views
- 1 reply
- 0 kudos
Best practices for optimizing Spark jobs
What are some best practices for optimizing Spark jobs in Databricks, especially when dealing with large datasets? Any tips or resources would be greatly appreciated! I'm trying to analyze data on restaurant menu prices, so that insights would be especiall...
There are so many. Here are a few:
- look for data skew
- shuffle as little as possible
- avoid many small files
- use Spark, not only pure Python
- if using an autoscaling cluster, check that you don't lose a lot of time scaling up/down
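The "avoid many small files" tip usually comes down to compacting before writing. A minimal sketch of the sizing arithmetic (`target_partitions` is a hypothetical helper, and the 128 MB target is a common rule of thumb, not an official Databricks figure):

```python
import math


def target_partitions(total_bytes, target_file_bytes=128 * 1024 * 1024):
    """Number of partitions to repartition to before writing, so each
    output file lands near the target size (default 128 MB)."""
    return max(1, math.ceil(total_bytes / target_file_bytes))
```

On a DataFrame you would then call something like `df.repartition(target_partitions(estimated_bytes))` before writing; on Delta tables, `OPTIMIZE` can compact small files after the fact.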
- 285 Views
- 1 reply
- 0 kudos
Databricks serverless vs. Snowflake
Hi all, we want to switch from Snowflake to Databricks SQL Warehouse/serverless to simplify our data layers and reduce data copies before the reporting layer. Please share the benefits of using serverless over Snowflake and any limitations you see. We...
One big pro is that you do not need to copy data to the DWH. Also, your transformations and analytics queries reside on the same platform (Databricks). Whether Databricks can cover all the requirements compared to Snowflake is hard to tell. Probably ther...
- 6260 Views
- 6 replies
- 3 kudos
403 FORBIDDEN: "You don't have permission to access this page" | 2023-07-02 08:03:23 | Error 403 | https:
I am getting above message when trying to access the quiz for Generative AI Fundamentals.
I am getting above message when trying to access the quiz for Lakehouse Fundamentals.
- 192 Views
- 0 replies
- 0 kudos
Azure Databricks to GCP Databricks Migration
Hi Team, Can you provide your thoughts on moving Databricks from Azure to GCP? What services are required for the migration, and are there any limitations on GCP compared to Azure? Also, are there any tools that can assist with the migration? Please ...