
Join us at the Data + AI World Tour Mumbai

Published on ‎09-13-2023 10:56 PM by Community Manager | Updated on ‎09-13-2023 10:59 PM

Explore the latest advancements, hear real-world case studies and discover best practices that deliver data and AI transformation. From the Databricks Lakehouse Platform to open source technologies including LLMs, Apache Spark™, Delta Lake, MLflow and more — the practitioner's track at the World Tour has all the information you need to accelerate and enhance your work. 

Join us to discover best practices across data engineering, data science, and advanced analytics on the lakehouse architecture.

Who should join?

Data engineer responsible for designing and managing data pipelines
Data scientist working on cutting-edge ML and AI challenges
ML engineer focused on deploying models into production
Data analyst in charge of uncovering insights
Data architect responsible for designing and securing data infrastructure
Business leader interested in understanding the value of a unified and open data platform

From inspiring keynotes to insightful sessions, the Data + AI World Tour Mumbai has something for you. Click here to learn more. 


Event has ended
You can no longer attend this event.

Thu, Sep 14, 2023 09:30 PM PDT
Fri, Sep 15, 2023 05:30 AM PDT
Community Manager

🚀 Join the Databricks Community Booth Today at Data + AI World Tour Mumbai! 🚀

Are you ready for an incredible experience at the Data + AI World Tour Mumbai?

The Databricks Community Booth is your hub for data and AI, and we invite you to participate in this exciting journey!

📅 Date: Today, Friday, September 15, 2023

🌐 Location: Databricks Community Booth

Here's why you won't want to miss this opportunity:

🧠 Unleash Your Expertise: Share your knowledge, insights, and expertise by answering questions and providing solutions on the Databricks Community platform. It's a chance to showcase your skills and help fellow data enthusiasts.

🤔 The First Challenge Awaits: Our first question is waiting for your input: "How to read an Excel file using Databricks?" 📊 This is your moment to shine! Post your solution and be part of the vibrant Databricks community.

🌟 Why Join Us:

  • Learn and Teach: Explore innovative solutions while educating others.
  • Connect with Peers: Network with fellow data and AI professionals.
  • Access to Experts: Collaborate with Databricks experts.
  • Win Prizes: Participate in the challenge to win exciting prizes!

🔗 Ready to Get Started? Here's what you need to do:

  1. Visit the Databricks Community Booth: Click here to access the community platform.

  2. Find the Challenge: Look for the challenge titled "How to read an Excel file using Databricks." It's your opportunity to shine!

  3. Post Your Solution: Share your solution and insights. Help your fellow community members by providing valuable answers.

  4. Engage and Collaborate: Don't forget to discuss and collaborate with others to find the best solutions.

🌐 Stay Connected: Follow us on social media for daily updates and highlights.

Use the hashtag #DataAIWorldTourMumbai to join the conversation.

🏆 Who Will Be the Hero of the Day? It could be YOU! Your knowledge can make a difference and solve real-world challenges.

Join us today at the Databricks Community Booth, dive into the world of data and AI, and let's make this event a remarkable success together!

See you at the Databricks Community Booth! 💡🚀

(Note: The Databricks Community Booth is accessible to registered Data + AI World Tour Mumbai attendees.)

We can read it through the Databricks File System (DBFS) or cloud storage like Amazon S3.

Databricks is scalable and can be optimized for a wide range of data solutions.

New Contributor II

Databricks is a great platform for data engineers to start working with.

New Contributor II

Great start to the day at the Databricks Data + AI World Tour. Thanks to the Databricks team for such a great start today.

Looking forward to learning more.

New Contributor II

I attended the Databricks Data + AI Summit in Mumbai, and it was an incredibly informative experience. The summit featured a lineup of industry experts who shared invaluable insights into various aspects of data engineering and AI. It provided a comprehensive overview of what a lakehouse architecture entails and how it combines the strengths of data lakes and data warehouses. Industry experts shared their experiences and success stories of implementing lakehouse architecture, and showcased real-world examples of how ETL pipelines and the lakehouse architecture were used to solve complex data challenges.


New Contributor II




Excellent product.

New Contributor II

Data session is good

New Contributor II

Databricks works every time.

New Contributor II

Came here for the first time.

New Contributor II

A one-stop solution for governance and insights.

New Contributor II

What is the use of AI in containers?

Visitor II

I have used Databricks for Spark implementations, and it's really amazing, as everything needed to work with the data is readily available. When working with large files and big data, it is always a challenging task to load data locally. With the Spark integration in Databricks, we found an easier way to work with data without managing any Spark dependencies ourselves. It is really amazing.

Keep it up, team!

Visitor II

Great to know about Databricks. It's really helpful.

New Contributor II

To read an Excel file in Databricks, you can use a library such as `pandas`, which ships with the Databricks Runtime. Here are the steps to do it:

1. Upload the Excel File: First, upload your Excel file to a location that Databricks can access, such as DBFS (Databricks File System) or an external storage system like Azure Blob Storage or AWS S3.

2. Create a Cluster: If you don't already have a Databricks cluster, create one.

3. Create a Notebook: Create a Databricks notebook where you will write your code.

4. Load the Excel File: Use the appropriate library and function to load the Excel file. Databricks supports multiple libraries for this purpose, but one common choice is the `pandas` library in Python. Here's an example using `pandas`:

# Import the necessary libraries
import pandas as pd

# Specify the path to your Excel file (the /dbfs mount exposes DBFS to local file APIs)
excel_file_path = "/dbfs/path/to/your/excel/file.xlsx"  # Replace with your file path

# Use pandas to read the Excel file
df = pd.read_excel(excel_file_path)

# Show the first few rows of the DataFrame to verify the data
display(df.head())

5. Execute the Code: Run the code in your Databricks notebook. It will read the Excel file and load it into a DataFrame (in this case, a `pandas` DataFrame).

6. Manipulate and Analyze Data: You can now use the `df` DataFrame to perform data manipulation, analysis, or any other operations you need within your Databricks notebook.

7. Save Results: If you need to save any results or processed data, you can do so using Databricks' capabilities, whether saving to a new Excel file, a database, or another storage location.

  • Make sure to configure your Databricks environment and notebook with the necessary dependencies if you're using libraries other than `pandas` for reading Excel files (for example, `pandas` uses the `openpyxl` engine for .xlsx files). Also, adjust the file path to match the location of your Excel file within your Databricks environment.
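The pandas read in step 4 can be sanity-checked end to end by generating a tiny workbook and reading it back. This is a minimal sketch, assuming `pandas` with the `openpyxl` engine is available (both ship with the Databricks Runtime; elsewhere, `pip install pandas openpyxl`). The path and sample data below are made up for illustration:

```python
# Minimal round-trip check: write a tiny workbook, then read it back.
import pandas as pd

# Hypothetical sample data standing in for a real workbook
sample = pd.DataFrame({"city": ["Mumbai", "Pune"], "attendees": [1200, 450]})

path = "/tmp/sample.xlsx"  # on Databricks, a /dbfs/... path would be used
sample.to_excel(path, index=False)  # write the Excel file (openpyxl engine)

df = pd.read_excel(path)  # read it back into a DataFrame
print(df.shape)  # 2 rows x 2 columns
```

If the shape and columns of `df` match what was written, the environment is set up correctly for reading real workbooks from DBFS or cloud storage.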
Visitor II

We can read it through Databricks File System or Storage like Amazon S3

Visitor II

Describe the files from databases