- 1866 Views
- 0 replies
- 0 kudos
Hi Team, Unity Catalog is not enabled in our workspace. We would like to know the billing usage information per user; could you please help us get these details (using a notebook-level script)? Regards, Phanindra
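For anyone with the same question: without Unity Catalog there is no system.billing.usage table, but account-level billable usage log delivery (configured in the account console) writes usage CSVs to cloud storage, and those can be read from a notebook. A minimal sketch under that assumption; the storage path is hypothetical and the column names should be checked against the actual CSV header:

```python
from pyspark.sql import functions as F

# Assumes billable usage log delivery is configured; path is hypothetical.
usage = (
    spark.read
    .option("header", "true")
    .csv("s3://<usage-delivery-bucket>/billable-usage/csv/")
)

# Aggregate DBUs per cluster owner as a proxy for per-user usage.
# "clusterOwnerUserName" and "dbus" are assumed column names.
per_user = (
    usage
    .groupBy("clusterOwnerUserName")
    .agg(F.sum(F.col("dbus").cast("double")).alias("total_dbus"))
    .orderBy(F.desc("total_dbus"))
)
display(per_user)
```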
- 10224 Views
- 1 reply
- 0 kudos
Hi all, tl;dr I ran the following on a docker-backed personal compute instance (running 13.3-LTS):

```
%sql
USE CATALOG hail;
USE SCHEMA volumes_testing;
CREATE VOLUME 1kg COMMENT 'Testing 1000 Genomes volume';
```

But this gives ```ParseException: [UC_VOLU...
Latest Reply
Resolved with the setting "spark.databricks.unityCatalog.volumes.enabled" = "true"
by nyck33 • New Contributor II
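For anyone hitting the same error, a minimal sketch of applying that setting from a notebook; the catalog, schema, and volume names are taken from the question, and depending on the runtime the setting may need to be a cluster-level Spark config rather than a session setting:

```python
# Setting from the accepted answer above.
spark.conf.set("spark.databricks.unityCatalog.volumes.enabled", "true")

spark.sql("USE CATALOG hail")
spark.sql("USE SCHEMA volumes_testing")
# Backticks because the volume name starts with a digit.
spark.sql("CREATE VOLUME IF NOT EXISTS `1kg` COMMENT 'Testing 1000 Genomes volume'")
```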
- 3022 Views
- 1 reply
- 0 kudos
I just emailed the onboarding-help email account to ask for a 2-week extension, as I want to complete the Data Engineer course to prepare for my new position. I have 2 accounts where the trial expired, one community account which cannot be used ...
Latest Reply
This is what happened when trying to sign up with another email.
- 5111 Views
- 1 reply
- 0 kudos
Source data looks like: {
"IntegrityLevel": "16384",
"ParentProcessId": "10972929104936",
"SourceProcessId": "10972929104936",
"SHA256Hash": "a26a1ffb81a61281ffa55cb7778cc3fb0ff981704de49f75f51f18b283fba7a2",
"ImageFileName": "\\Device\\Harddisk...
Latest Reply
Thanks for confirming that the readStream.withColumn() approach is the best available option. Unfortunately, this will force me to maintain a separate notebook for each of the event types, but it does work. I was hoping to create just one paramet...
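For readers following along, a minimal sketch of the readStream.withColumn() pattern confirmed above, combined with a notebook widget so one parameterized notebook can be run per event type instead of being copied. The path, widget name, and default event-type value are hypothetical; the schema fields come from the sample record in the question:

```python
from pyspark.sql import functions as F

# Hypothetical notebook parameter selecting which event type to process.
dbutils.widgets.text("event_type", "ProcessRollup2")
event_type = dbutils.widgets.get("event_type")

# Schema fields taken from the sample record in the question.
schema = ("IntegrityLevel STRING, ParentProcessId STRING, "
          "SourceProcessId STRING, SHA256Hash STRING, ImageFileName STRING")

stream = (
    spark.readStream
    .format("json")
    .schema(schema)
    .load(f"/mnt/raw/{event_type}/")          # hypothetical input path
    .withColumn("event_type", F.lit(event_type))
)
```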
- 5565 Views
- 1 reply
- 0 kudos
```
from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf
from pyspark.storagelevel import StorageLevel

spark = SparkSession.builder.appName('TEST').config('spark.ui.port', '4098').enableHiveSupport().getOrCreate()
df4 = spark.sql('...
```
Latest Reply
Thank you so much for taking time and explaining the concepts
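The original question is truncated, but the StorageLevel import suggests persistence was among the concepts discussed. A minimal sketch of that pattern, with a hypothetical query:

```python
from pyspark.storagelevel import StorageLevel

df = spark.sql("SELECT * FROM some_table")   # hypothetical query
df.persist(StorageLevel.MEMORY_AND_DISK)     # keep partitions in memory, spill to disk
df.count()                                   # first action materializes the cache
# ... reuse df across further actions without recomputation ...
df.unpersist()                               # release storage when finished
```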
- 4173 Views
- 2 replies
- 2 kudos
I have a semicolon-separated file in an ADLS container that's been added to Unity Catalog as an external location. When I run the following code on an all-purpose cluster, it runs OK and displays the schema:

```
import dlt

@dlt.table
def test_data_csv():
    ...
```
Latest Reply
@Retired_mod can you confirm that .option("delimiter", ";") is ignored when run in a DLT pipeline? (please see the post above) My colleague confirmed the behavior.
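For context, a minimal sketch of the kind of DLT definition being discussed. The external-location path is hypothetical, and "sep" is the documented alias for the CSV delimiter option, which may be worth trying alongside "delimiter":

```python
import dlt

@dlt.table
def test_data_csv():
    # "sep" and "delimiter" are aliases for the CSV separator option;
    # the path below is hypothetical.
    return (
        spark.read
        .format("csv")
        .option("header", "true")
        .option("sep", ";")
        .load("abfss://<container>@<account>.dfs.core.windows.net/<path>/")
    )
```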
- 1140 Views
- 0 replies
- 0 kudos
For my exam I have to do a small project for the company I'm interning at. I am creating a data warehouse where I will have to transfer data from another database and then transform it to a star schema. Would Databricks be good for this, or is it t...
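For what it's worth, a minimal sketch of the kind of pipeline being described (JDBC ingest, then a dimension/fact split for a star schema); the connection details, table names, and columns are all hypothetical:

```python
# Pull source data over JDBC (connection details hypothetical).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<host>;databaseName=<db>")
    .option("dbtable", "dbo.orders")
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)

# Split into a dimension and a fact table (star schema); columns hypothetical.
dim_customer = orders.select("customer_id", "customer_name").dropDuplicates(["customer_id"])
fact_orders = orders.select("order_id", "customer_id", "order_date", "amount")

dim_customer.write.mode("overwrite").saveAsTable("dim_customer")
fact_orders.write.mode("overwrite").saveAsTable("fact_orders")
```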