Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Data + AI Summit 2024 - Data Engineering & Streaming

Forum Posts

User15787040559
by New Contributor III
  • 2156 Views
  • 1 reply
  • 0 kudos

What's the difference between Normalization and Standardization?

Normalization typically means rescaling values into the range [0,1]. Standardization typically means rescaling data to have a mean of 0 and a standard deviation of 1 (unit variance).

Latest Reply
User16826994223
Honored Contributor III
  • 0 kudos

Normalization typically means rescaling values into the range [0,1]. Standardization typically means rescaling data to have a mean of 0 and a standard deviation of 1 (unit variance). A link which explains it better is - https://towardsdatascience.com...
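
To make the distinction concrete, here is a minimal NumPy sketch (the sample values are made up):

    import numpy as np

    x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

    # Normalization (min-max scaling): rescale into the range [0, 1]
    x_norm = (x - x.min()) / (x.max() - x.min())

    # Standardization (z-scoring): shift to mean 0, scale to unit variance
    x_std = (x - x.mean()) / x.std()

    print(x_norm)                      # [0.   0.25 0.5  0.75 1.  ]
    print(x_std.mean(), x_std.std())   # ~0.0, 1.0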

User16826994223
by Honored Contributor III
  • 484 Views
  • 1 reply
  • 2 kudos

Issue: Your account {email} does not have the owner or contributor role on the Databricks workspace resource in the Azure portal 

Latest Reply
sajith_appukutt
Honored Contributor II
  • 2 kudos

https://docs.microsoft.com/en-us/azure/databricks/scenarios/frequently-asked-questions-databricks#solution-1

User16826994223
by Honored Contributor III
  • 1396 Views
  • 1 reply
  • 0 kudos

Streaming with Kafka with the same group ID

A Kafka topic has 300 partitions, and I see two clusters running with the same group ID. Will the data be duplicated in my Delta bronze layer?

Latest Reply
sajith_appukutt
Honored Contributor II
  • 0 kudos

By default, each streaming query generates a unique group ID for reading data (ensuring its own consumer group). In scenarios where you'd want to specify it (authz etc.), it is not recommended to have two streaming applications specify ...
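
For reference, a minimal PySpark sketch (broker and topic names are hypothetical); by default no group ID is set, so each query gets its own consumer group:

    # Each streaming query gets a unique, auto-generated consumer group by
    # default, so two clusters reading this topic independently each see the
    # full stream (and would each write their own copy to a bronze table).
    df = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "events")
          # .option("kafka.group.id", "my-group")  # Spark 3.0+; set only when
          # required (e.g. authorization), and never from two queries at once,
          # since they would contend for the topic's partitions
          .load())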

User16826994223
by Honored Contributor III
  • 2987 Views
  • 3 replies
  • 0 kudos

Resolved! Delta Lake checkpoints storage concept

In which format are the checkpoints stored, and how do they help Delta increase performance?

Latest Reply
aladda
Honored Contributor II
  • 0 kudos

Great points above on how checkpointing helps with performance. In addition, Delta Lake also provides other data organization strategies, such as compaction and Z-ordering, to help with both read and write performance of Delta tables. Additional details ...
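
(For context on the question: Delta log checkpoints are Parquet snapshots of the JSON transaction log, written periodically under _delta_log so readers don't have to replay every commit.) A minimal sketch of the compaction and Z-ordering mentioned above, with a hypothetical table and column:

    # Compact small files into larger ones for faster scans
    spark.sql("OPTIMIZE events")

    # Additionally co-locate rows by a frequently filtered column
    spark.sql("OPTIMIZE events ZORDER BY (eventTime)")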

2 More Replies
Srikanth_Gupta_
by Valued Contributor
  • 2141 Views
  • 2 replies
  • 0 kudos
Latest Reply
aladda
Honored Contributor II
  • 0 kudos

Temp Views and Global Temp Views are the most common way of sharing data across languages within a Notebook/Cluster
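
A minimal sketch (hypothetical view names) of both scopes:

    df = spark.range(10)

    # Temp view: visible to all languages in the current Spark session/notebook
    df.createOrReplaceTempView("my_view")

    # Global temp view: visible across notebooks on the same cluster,
    # under the reserved global_temp database
    df.createOrReplaceGlobalTempView("shared_view")

    spark.sql("SELECT count(*) FROM my_view").show()
    spark.sql("SELECT count(*) FROM global_temp.shared_view").show()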

1 More Replies
User15787040559
by New Contributor III
  • 3112 Views
  • 1 reply
  • 0 kudos

How many records does Spark use to infer the schema? Entire file or just the first "X" number of records?

It depends. If you specify the schema, it will be zero; otherwise it will do a full file scan, which doesn't work well when processing Big Data at large scale. CSV files DataFrame Reader: https://spark.apache.org/docs/latest/api/python/reference/api/pyspark...

Latest Reply
aladda
Honored Contributor II
  • 0 kudos

As indicated, there are ways to manage the amount of data being sampled for inferring the schema. However, as a best practice for production workloads it's always best to define the schema explicitly for consistency, repeatability and robustness of the pipe...
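
A minimal sketch of both approaches (path and columns are hypothetical; samplingRatio availability depends on the Spark version and source format):

    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    # Explicit schema: zero records read for inference, consistent runs
    schema = StructType([
        StructField("id", IntegerType(), True),
        StructField("name", StringType(), True),
    ])
    df = spark.read.csv("/data/people.csv", header=True, schema=schema)

    # Or bound the inference scan instead of sampling the whole file
    df2 = (spark.read
           .option("samplingRatio", 0.1)
           .csv("/data/people.csv", header=True, inferSchema=True))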

aladda
by Honored Contributor II
  • 797 Views
  • 1 reply
  • 0 kudos
Latest Reply
aladda
Honored Contributor II
  • 0 kudos

Yes, Convert to Delta allows for converting a Parquet table into Delta format in place by adding a transaction log, inferring the schema, and also collecting stats to improve query performance - https://docs.databricks.com/spark/latest/spark-sql/languag...
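
A minimal sketch (hypothetical paths); the conversion adds a _delta_log without rewriting the Parquet data files:

    # Unpartitioned Parquet directory -> Delta, in place
    spark.sql("CONVERT TO DELTA parquet.`/mnt/data/events`")

    # Partitioned tables must declare their partition columns
    spark.sql(
        "CONVERT TO DELTA parquet.`/mnt/data/events_by_date` "
        "PARTITIONED BY (date DATE)"
    )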

Anonymous
by Not applicable
  • 1089 Views
  • 3 replies
  • 0 kudos
Latest Reply
aladda
Honored Contributor II
  • 0 kudos

And to the earlier comment about Delta being an extension of Parquet: you can start with a dataset in Parquet format in S3 and do an in-place conversion to Delta without having to duplicate the data. See - https://docs.databricks.com/spark/latest/spark-...

2 More Replies
Anonymous
by Not applicable
  • 1513 Views
  • 2 replies
  • 1 kudos
Latest Reply
aladda
Honored Contributor II
  • 1 kudos

You can also use tags to set up a chargeback mechanism within your organization for distributed billing - https://docs.databricks.com/administration-guide/account-settings/usage-detail-tags-aws.html
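
A minimal sketch of attaching such tags at cluster creation through the REST API (workspace URL, token, and all field values are hypothetical):

    import requests

    resp = requests.post(
        "https://<workspace-url>/api/2.0/clusters/create",
        headers={"Authorization": "Bearer <personal-access-token>"},
        json={
            "cluster_name": "etl-cluster",
            "spark_version": "13.3.x-scala2.12",   # hypothetical runtime
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
            # These tags flow into the usage logs for chargeback reporting
            "custom_tags": {"team": "data-eng", "cost_center": "1234"},
        },
    )
    resp.raise_for_status()
    print(resp.json()["cluster_id"])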

1 More Replies
Anonymous
by Not applicable
  • 1466 Views
  • 2 replies
  • 0 kudos
Latest Reply
aladda
Honored Contributor II
  • 0 kudos

Per the comment above, the cluster deletion mechanism is designed to keep your cluster configuration experience organized and avoid a proliferation of cluster configs. It's also a good idea to set up cluster policies and leverage those as a guide for what kind...
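
A minimal sketch of such a policy created via the Cluster Policies API (all names and limits are hypothetical):

    import json
    import requests

    # Policy rules: pin the node type, cap cluster size, force auto-termination
    policy = {
        "node_type_id": {"type": "fixed", "value": "i3.xlarge"},
        "num_workers": {"type": "range", "maxValue": 10},
        "autotermination_minutes": {"type": "fixed", "value": 60},
    }

    resp = requests.post(
        "https://<workspace-url>/api/2.0/policies/clusters/create",
        headers={"Authorization": "Bearer <personal-access-token>"},
        json={"name": "small-etl-clusters", "definition": json.dumps(policy)},
    )
    resp.raise_for_status()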

1 More Replies
User16830818469
by New Contributor
  • 3699 Views
  • 2 replies
  • 0 kudos

Databricks SQL Visualizations - export/embed

Is it possible to embed Databricks SQL dashboards or specific widgets/visualizations into a webpage?

Latest Reply
aladda
Honored Contributor II
  • 0 kudos

Databricks SQL also integrates with several popular BI tools over JDBC/ODBC, which you can use as a mechanism to embed visualizations into a webpage.
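
Beyond BI tools, a web app can also pull data itself; a minimal sketch with the databricks-sql-connector package (connection details are hypothetical):

    from databricks import sql

    with sql.connect(server_hostname="<workspace-host>",
                     http_path="<sql-warehouse-http-path>",
                     access_token="<personal-access-token>") as conn:
        with conn.cursor() as cursor:
            cursor.execute("SELECT current_date()")
            print(cursor.fetchall())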

1 More Replies
Anonymous
by Not applicable
  • 1294 Views
  • 1 replies
  • 0 kudos
Latest Reply
aladda
Honored Contributor II
  • 0 kudos

You can use libraries such as Seaborn, Bokeh, Matplotlib, and Plotly for visualization inside Python notebooks. See https://docs.databricks.com/notebooks/visualizations/index.html#visualizations-in-python. Also, Databricks has its own built-in visualiza...
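
A minimal Matplotlib sketch as it would run in a Python notebook cell (the data is made up):

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot([1, 2, 3, 4], [10, 20, 15, 30], marker="o")
    ax.set_xlabel("day")
    ax.set_ylabel("count")
    plt.show()  # in Databricks notebooks, display(fig) renders it as well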

aladda
by Honored Contributor II
  • 6750 Views
  • 2 replies
  • 1 kudos
Latest Reply
aladda
Honored Contributor II
  • 1 kudos

Thanks @Digan Parikh. Credit to Tahir Fayyaz. Found a couple of different paths depending on whether you're looking to bring in raw GA data vs. aggregated GA data. 1) For raw, you can bring in data from the GA Universal Analytics 360 paid version or GA ...

1 More Replies
tj-cycyota
by New Contributor III
  • 1335 Views
  • 1 reply
  • 1 kudos

Can you use the Databricks API from a notebook?

I want to test out different APIs directly from a Databricks notebook instead of using Postman or CURL. Is this possible?

Latest Reply
Mooune_DBU
Valued Contributor
  • 1 kudos

If your question is about using the Databricks API from within a Databricks notebook, then the answer is yes, of course; you can orchestrate anything and invoke the REST API from a Python notebook using the `requests` library already bake...
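
A minimal sketch of that approach (workspace URL and token are hypothetical; in practice, store the token in a Databricks secret rather than in the notebook):

    import requests

    host = "https://<workspace-url>"
    token = "<personal-access-token>"

    # e.g. list the clusters in the workspace
    resp = requests.get(
        f"{host}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    print(resp.json())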

