Data Engineering

Forum Posts

by User16790091296 (Contributor II)
  • 2806 Views
  • 1 reply
  • 2 kudos

How to restart a cluster on Databricks using the Databricks CLI?

I'm trying to restart an existing cluster in Databricks on Azure using databricks-cli. I'm using the following command:

databricks clusters restart {"cluster_id": "0710-121255-liner30"}

But it gives me this error:

Error: Missing option "--cluster-...

Latest Reply
User16766737456
New Contributor III
  • 2 kudos

Can you try:

databricks clusters restart --cluster-id <the-cluster-id>

$ databricks clusters restart --help
Usage: databricks clusters restart [OPTIONS]

  Restarts a Databricks cluster given its ID.

  If the cluster is not currently in a RUNNING st...

  • 2 kudos
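For anyone scripting this outside the CLI, a minimal sketch of the equivalent call against the Clusters REST API (which the CLI wraps); the workspace URL and token below are placeholders:

# Sketch: restart a cluster via the REST API the CLI wraps.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "dapi..."  # placeholder personal access token

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/restart",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"cluster_id": "0710-121255-liner30"},  # cluster ID from the question
)
resp.raise_for_status()  # a 200 with an empty body means the restart was accepted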
by MadelynM (New Contributor III)
  • 5146 Views
  • 1 reply
  • 0 kudos

Delta Live Tables + S3 | 5 tips for cloud storage with DLT

You’ve gotten familiar with Delta Live Tables (DLT) via the quickstart and getting started guide. Now it’s time to tackle creating a DLT data pipeline for your cloud storage, with one line of code. Here’s how it’ll look when you’re starting:

CREATE OR ...

Latest Reply
MadelynM
New Contributor III
  • 0 kudos

Tip #3: Use JSON cluster configurations to access your storage location

Knowledge check: How do I modify DLT settings using JSON? Delta Live Tables settings are expressed as JSON and can be modified in the Delta Live Tables UI [AWS] [Azure] [GCP].

Examp...

  • 0 kudos
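As an illustration of that tip, a sketch of what those JSON settings might look like for a pipeline reading from S3 through an instance profile; every name and ARN below is a hypothetical placeholder, expressed as a Python dict you could dump into the DLT UI's JSON editor:

# Sketch: DLT settings as JSON; all names/ARNs are hypothetical placeholders.
import json

pipeline_settings = {
    "name": "my-dlt-pipeline",
    "storage": "s3://my-bucket/dlt-storage",  # where DLT keeps tables/metadata
    "clusters": [
        {
            "label": "default",
            "aws_attributes": {
                # instance profile that grants the pipeline cluster S3 access
                "instance_profile_arn": "arn:aws:iam::123456789012:instance-profile/dlt-s3-access"
            },
        }
    ],
    "libraries": [{"notebook": {"path": "/Repos/me/project/dlt_notebook"}}],
}

print(json.dumps(pipeline_settings, indent=2))  # paste into the DLT UI JSON editor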
by Deepak_Goldwyn (New Contributor III)
  • 2660 Views
  • 4 replies
  • 2 kudos

Resolved! Create Jobs and Pipelines in Workflows using API

I am trying to create Databricks Jobs and Delta Live Table (DLT) pipelines by using the Databricks API. I would like to have the JSON code of Jobs and DLT in the repository (to configure the code per environment) and execute the Databricks API by passing...

Latest Reply
Deepak_Goldwyn
New Contributor III
  • 2 kudos

Hi Jose, yes, it answered my question. I am indeed using a JSON file to create Jobs and pipelines. Thanks.

  • 2 kudos
3 More Replies
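A minimal sketch of the approach the thread lands on: keep the job definition as a JSON file in the repository and POST it to the Jobs API; the URL, token, and file path below are placeholders:

# Sketch: create a job from a JSON spec kept in the repository.
import json
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "dapi..."  # placeholder personal access token

with open("jobs/my_job.json") as f:  # hypothetical path inside the repo
    job_spec = json.load(f)

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job", resp.json()["job_id"])  # the API returns the new job's ID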
by huggies_23 (New Contributor)
  • 572 Views
  • 0 replies
  • 0 kudos

Is it possible to specify a specific branch commit when deploying repo to a workspace via the Databricks CLI?

I would like to know if it is possible to include a specific commit identifier when updating a repo in a workspace via the Databricks CLI. Why? Currently we use the repos CLI to push updates to code throughout dev, test and prod (testing along the wa...

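No replies yet, but for what it's worth: as far as I can tell, the Repos API (which the repos CLI wraps) pins a repo to a branch or a tag rather than a bare commit SHA, so one workaround is to tag the desired commit in the Git provider and update the repo to that tag. A sketch, with placeholder URL, token, repo ID, and tag:

# Sketch: pin a workspace repo to an exact commit by tagging that commit in
# the Git provider, then updating the repo to the tag (the API takes a
# branch or tag, not a raw SHA). All values are placeholders.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = "dapi..."  # placeholder personal access token
REPO_ID = 123456  # hypothetical workspace repo ID

resp = requests.patch(
    f"{WORKSPACE_URL}/api/2.0/repos/{REPO_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"tag": "release-2022-08-17"},  # hypothetical tag at the target commit
)
resp.raise_for_status()
print(resp.json().get("head_commit_id"))  # verify which commit is checked out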
by Taha_Hussain (Valued Contributor II)
  • 5645 Views
  • 2 replies
  • 6 kudos

Resolved! Create a Dashboard: How do I visualize data with Databricks SQL or my BI tool?

Databricks SQL helps query and visualize data so you can share real-time business insights with built-in dashboards or your favorite BI tools. This post helps you create queries, visualizations and dashboards and connect to your BI tools for deeper da...

Latest Reply
Anonymous
Not applicable
  • 6 kudos

Thanks for the information, I will try to figure it out from here. Keep sharing such informative posts.

  • 6 kudos
1 More Replies
by Taha_Hussain (Valued Contributor II)
  • 805 Views
  • 0 replies
  • 3 kudos

Register for Databricks Office Hours: August 17 & August 31, 8:00am - 9:00am PT | 3:00pm - 4:00pm GMT

Register for Databricks Office Hours, August 17 & August 31, from 8:00am - 9:00am PT | 3:00pm - 4:00pm GMT.

Databricks Office Hours connects you directly with experts to answer your Databricks questions.

Join us to:
  • Troubleshoot your technical questions...

by Dua14 (New Contributor)
  • 913 Views
  • 2 replies
  • 1 kudos

Databricks and AWS Cloud watch agent issue

I'm facing a problem while connecting Databricks with AWS CloudWatch. I want to send certain logs to CloudWatch, but it seems like there is some connectivity issue between the two parties.

Latest Reply
Anonymous
Not applicable
  • 1 kudos

Hi @Tushar Dua, please follow the blog below, which has details on how to monitor Databricks using CloudWatch: How to Monitor Databricks with AWS CloudWatch

  • 1 kudos
1 More Replies
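The linked blog covers the CloudWatch-agent route; as a quicker connectivity check (a different, simpler technique, not the blog's approach), a notebook can write a log event directly with boto3, assuming the cluster's instance profile allows the relevant logs:* actions. Group, stream, and region below are hypothetical:

# Sketch: write one log event straight to CloudWatch Logs with boto3,
# assuming the cluster's instance profile permits it. Names are hypothetical.
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # assumed region

group, stream = "/databricks/test", "connectivity-check"
try:
    logs.create_log_group(logGroupName=group)
except logs.exceptions.ResourceAlreadyExistsException:
    pass  # fine if it already exists
try:
    logs.create_log_stream(logGroupName=group, logStreamName=stream)
except logs.exceptions.ResourceAlreadyExistsException:
    pass

logs.put_log_events(
    logGroupName=group,
    logStreamName=stream,
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "hello from Databricks"}],
)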
by RonVBrown (New Contributor)
  • 1534 Views
  • 4 replies
  • 3 kudos
Latest Reply
Sivaprasad1
Valued Contributor II
  • 3 kudos

@RonVBrown: Could you please refer to the link below: https://docs.databricks.com/data/data-sources/elasticsearch.html

Please try to use the OpenSearch library instead of the ES jar if it does not work: https://search.maven.org/artifact/org.opensearc...

  • 3 kudos
3 More Replies
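Along the lines of that reply, a sketch of reading an index with the elasticsearch-spark connector (or the OpenSearch equivalent), assuming the connector jar is attached to the cluster and the code runs in a Databricks notebook where spark is predefined; host, port, and index name are placeholders:

# Sketch: read an Elasticsearch index with the elasticsearch-spark connector;
# swap the format for the OpenSearch library's if using that instead.
df = (
    spark.read.format("org.elasticsearch.spark.sql")
    .option("es.nodes", "my-es-host.example.com")  # placeholder host
    .option("es.port", "9200")
    .option("es.net.ssl", "true")
    .option("es.nodes.wan.only", "true")  # typical for managed/cloud clusters
    .load("my-index")  # hypothetical index name
)
df.show(5)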
by 118004 (New Contributor II)
  • 369 Views
  • 0 replies
  • 0 kudos

Use databricks-sync import to migrate to new workspace

Hello, we are using the databricks-sync tool in an attempt to migrate from a legacy workspace into a new E2 account workspace. The tool exports JSON files successfully, but when I try to import, I receive various Terraform errors referencing undeclar...

by jgrgn (New Contributor)
  • 570 Views
  • 0 replies
  • 0 kudos

Define notebook path from a parameter

Is there a way to define the notebook path based on a parameter from the calling notebook using %run? I am aware of dbutils.notebook.run(), but would like to have all the functions defined in the referenced notebook available in the calling noteboo...

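No replies yet; one relevant detail worth noting: %run takes a literal path and, as far as I know, does not expand variables or widget values, while dbutils.notebook.run() accepts a computed path but runs the target in a separate context, so its definitions don't become available in the caller. A short sketch contrasting the two (the helper path is hypothetical):

# %run needs a literal path and imports the target's definitions here:
# %run /Shared/helpers          <- works; functions defined there become usable
# %run $notebook_path           <- not expanded; %run has no variable substitution

# dbutils.notebook.run() accepts a computed path, but the target runs in a
# separate context, so its function definitions are NOT visible afterwards.
path = "/Shared/helpers"  # hypothetical notebook path
result = dbutils.notebook.run(path, 600)  # returns the target's exit value only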
by BradSheridan (Valued Contributor)
  • 1627 Views
  • 0 replies
  • 0 kudos

Workflow parameters

Hey everyone! I'm close but can't seem to figure this out. I'm trying to add 2 notebooks to a Databricks Job. Instead of the first command in both notebooks being a connection to an RDS/Redshift cluster, I'd prefer to make that connection once and ha...

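No replies yet, but one pattern that may fit: a live connection object can't be shared across job tasks, though connection parameters can be handed between tasks with the task values API, so each notebook rebuilds the connection from them. A sketch with hypothetical task and key names:

# Sketch: share connection *parameters* (not a live connection) across tasks.
# Task and key names are hypothetical.

# In the first task's notebook (task name "setup"):
dbutils.jobs.taskValues.set(key="redshift_host", value="my-cluster.example.com")
dbutils.jobs.taskValues.set(key="redshift_db", value="analytics")

# In a downstream task's notebook:
host = dbutils.jobs.taskValues.get(taskKey="setup", key="redshift_host", debugValue="localhost")
db = dbutils.jobs.taskValues.get(taskKey="setup", key="redshift_db", debugValue="dev")
# ...open the RDS/Redshift connection here once per task, using host/db.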
by lei_armstrong (New Contributor II)
  • 5616 Views
  • 7 replies
  • 5 kudos

Executing Notebooks - Run All Cells vs Run All Below

Due to dependencies, if one of our cells errors, we want the notebook to stop executing. We've noticed some odd behaviour when executing notebooks depending on whether "Run all cells in this notebook" is selected from the header versus "Run All Below"....

Latest Reply
pinecone
New Contributor II
  • 5 kudos

I second this request. It's odd that the behaviour is different when running all vs. running all below. Please make it consistent and document it properly.

  • 5 kudos
6 More Replies
by palzor (New Contributor III)
  • 438 Views
  • 0 replies
  • 2 kudos

What is the best practice while loading a Delta table: do I infer the schema or provide the schema?

I am loading Avro files into Delta tables. I am doing this for multiple tables; some files are big (2-3 GB) and most of them are small, in the range of a few MBs. I am using Auto Loader to load the data into the Delta tables. My question is: what is the ...

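No replies yet; a common rule of thumb is that for stable Avro sources an explicit schema avoids the inference pass over large files and fails fast on drift. A minimal Auto Loader sketch, with placeholder paths, columns, and table names:

# Sketch: Auto Loader with an explicit schema instead of inference.
# Paths, columns, and table names are hypothetical.
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

schema = StructType([
    StructField("id", StringType(), True),
    StructField("amount", DoubleType(), True),
])

df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "avro")
    .schema(schema)  # skip inference; schema drift fails fast
    .load("s3://my-bucket/landing/avro/")
)

(df.writeStream
   .option("checkpointLocation", "s3://my-bucket/checkpoints/my_table")
   .toTable("my_db.my_table"))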
by anisha_93 (New Contributor II)
  • 4143 Views
  • 2 replies
  • 1 kudos

Error in SQL statement: KeyProviderException: Failure to initialize configuration

I have a source Delta table to which I have selectively granted access for a particular pool ID (which can be thought of as a dummy user). From the pool ID interface, whenever I run a select on any of the tables it has access to, it is faili...

Latest Reply
alicewong20
New Contributor II
  • 1 kudos

Hello all, I got the same problem. Can anyone help?

  • 1 kudos
1 More Replies
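For anyone else hitting this: KeyProviderException often indicates the cluster or session a user runs on is missing the storage-credential configuration the original cluster had. A hedged sketch of session-level ADLS key configuration, with placeholder account, container, scope, and secret names:

# Sketch: set the ADLS account key for the current session from a secret
# scope. Account, container, scope, and key names are placeholders.
storage_account = "mystorageacct"
account_key = dbutils.secrets.get(scope="my-scope", key="storage-account-key")

spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    account_key,
)
df = spark.read.load(
    f"abfss://mycontainer@{storage_account}.dfs.core.windows.net/path/to/table",
    format="delta",
)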
by Dicer (Valued Contributor)
  • 2512 Views
  • 4 replies
  • 3 kudos

Resolved! Azure Databricks: Failed to extract data that falls between two timestamps within the same dates using PySpark

Data types:
AAPL_Time: timestamp
AAPL_Close: float

Raw data:
AAPL_Time                      AAPL_Close
2015-05-11T08:00:00.000+0000   29.0344
2015-05-11T08:30:00.000+0000   29.0187
2015-05-11T09:00:00.000+0000   29.0346
2015-05-11T09:3...

Latest Reply
Anonymous
Not applicable
  • 3 kudos

Another thing to try: the hour() and minute() functions return integers.

  • 3 kudos
3 More Replies
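Building on that reply, a sketch of filtering to a time-of-day window with hour() and minute(), assuming a DataFrame df with the AAPL_Time column from the question:

# Sketch: keep rows between 08:30 and 09:00 inclusive on any date, using the
# integer results of hour() and minute() on the timestamp column.
from pyspark.sql import functions as F

mins = F.hour("AAPL_Time") * 60 + F.minute("AAPL_Time")  # minutes since midnight
window_df = df.filter((mins >= 8 * 60 + 30) & (mins <= 9 * 60))
window_df.show()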