Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How can we run scala in a jupyter notebook?

Kaniz_Fatma
Community Manager
1 ACCEPTED SOLUTION

Kaniz_Fatma
Community Manager

Step 1: Install the package

pip install spylon-kernel

Step 2: Create a kernel spec

This will allow us to select the Scala kernel in the notebook.

python -m spylon_kernel install
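To confirm the kernel spec was registered, you can list the installed Jupyter kernels (this assumes `jupyter` is on your PATH; the exact path shown will vary by environment):

```shell
# List installed Jupyter kernel specs; spylon_kernel should appear in the output
jupyter kernelspec list
```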

Step 3: Start the Jupyter notebook

jupyter notebook

In the notebook, select

New -> spylon-kernel

This will start our Scala kernel.

Step 4: Test the notebook

Let's write some Scala code:

val x = 2
 
val y = 3
 
x + y
 

The output should be something similar to the result in the below image.

[Image: notebook output showing the evaluated Scala expressions]

As you can see, it also starts the Spark components. For this, please make sure you have SPARK_HOME set up.

Now we can even use Spark. Let's test it by creating a Dataset:

val data = Seq((1,2,3), (4,5,6), (6,7,8), (9,19,10))
val ds = spark.createDataset(data)
ds.show()
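As a small variation (a sketch, assuming the same spylon-kernel session with its `spark` object and auto-imported implicits, as in spark-shell), you can name the columns with `toDF` so that `show()` prints readable headers instead of the default `_1`, `_2`, `_3`:

```scala
// Sketch: same data as above, but with named columns via toDF
val data = Seq((1, 2, 3), (4, 5, 6), (6, 7, 8), (9, 19, 10))
val df = data.toDF("a", "b", "c")
df.show()
```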

This should output a simple DataFrame:

[Image: DataFrame output from ds.show()]

And we can even use Python in this kernel, using the %%python cell magic:

%%python
x = 2
print(x)

For more info, you can visit the spylon-kernel GitHub page.

The notebook with the code above is available here.


