One of the standout features of Ballerina, compared to most other programming languages, is its first-class support for JSON and XML as built-in data types. Here I will discuss how to do JSON manipulation with Ballerina with ease. Defining…
Written by Supun Setunga
What are Annotations? An annotation is a Ballerina code snippet that can be attached to some Ballerina code, and holds some metadata related to that attached code. Annotations are not executable, but can be used to alter the behavior of some executable…
Written by Supun Setunga
Suppose you are writing an integration scenario (in Ballerina or any technology), and you need to test the end-to-end scenario you just wrote. One way to achieve this is to mock a back-end service and test your integration flow, without having to hassle with…
Written by Supun Setunga
Download: You can download the Ballerina runtime and tooling from http://ballerinalang.org/downloads/. The Ballerina runtime contains the runtime environment (bre) needed to run Ballerina main programs and Ballerina services. The Ballerina tooling distribution contains the runtime environment (bre), the Composer (visual editor), Docerina (for API document generation)…
Written by Supun Setunga
In the previous post we discussed how to connect a Jupyter notebook to PySpark. Going forward, in this post I will discuss how you can run Python scripts, and analyze and build machine learning models on top of data stored in…
Written by Supun Setunga
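As a hedged sketch of the kind of model building that post covers (assuming the notebook was launched through pyspark so that a ready-made SparkContext `sc` is available, and using the MLlib API that ships with Spark 1.6; the data values here are invented purely for illustration):

```python
# Minimal sketch: train a logistic regression model on an RDD of labeled
# points with Spark 1.6's MLlib. `sc` is the SparkContext that pyspark
# injects into the notebook session (an assumption of this setup).
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithSGD

# Toy data set: (label, [feature1, feature2]) -- values are illustrative only
data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]),
    LabeledPoint(1.0, [1.0, 0.0]),
    LabeledPoint(1.0, [1.5, 0.2]),
])

model = LogisticRegressionWithSGD.train(data, iterations=100)
print(model.predict([1.2, 0.1]))  # predicts a class label (0 or 1)
```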
Prerequisites: Install Jupyter. Download and uncompress the Spark 1.6.2 binary. Download pyrolite-4.13.jar. Set Environment Variables: open ~/.bashrc and add the following entries (then launch the notebook by running pyspark):

export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
export PYSPARK_PYTHON=/home/supun/Supun/Softwares/anaconda3/bin/python
export SPARK_HOME="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6"
export PATH="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6/bin:$PATH"
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib:$PYTHONPATH
export SPARK_CLASSPATH=/home/supun/Downloads/pyrolite-4.13.jar

If you are…
Written by Supun Setunga
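Once those variables are in place and the notebook has been launched with pyspark, a quick sanity check can confirm the session is wired up correctly (a minimal sketch, assuming `sc` is the SparkContext that pyspark injects):

```python
# Quick sanity check inside the Jupyter notebook launched via `pyspark`:
# the driver injects a ready-made SparkContext as `sc`.
print(sc.version)                                # should print 1.6.2
rdd = sc.parallelize(range(100))
print(rdd.filter(lambda x: x % 2 == 0).count())  # 50 even numbers
```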
Prerequisites: Install Python. Install the IPython/Jupyter notebook. Create a directory as a workspace for the notebook, and navigate to it. Start Jupyter by running: jupyter notebook. Create a new Python notebook. To use a Pandas DataFrame in this notebook script, we first need to import…
Written by Supun Setunga
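As a minimal sketch of that first step (the column names and the commented-out CSV file name below are placeholders, not from the original post):

```python
# Import pandas and build a small DataFrame directly in the notebook.
import pandas as pd

df = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "score": [85, 92, 78],
})
print(df.describe())  # summary statistics for the numeric column

# Reading from a file works the same way (placeholder file name):
# df = pd.read_csv("data.csv")
```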
This post will discuss how to set up a fully distributed HBase cluster. Here we will not run ZooKeeper as a separate server, but will instead use the ZooKeeper instance that is embedded in HBase itself. Our setup will consist of 1 master…
Written by Supun Setunga
Here I will discuss how to set up a fully distributed Hadoop cluster with 1 master and 2 slaves, where the three nodes are set up on three different machines. Updating Hostnames: To start things off, let's first give hostnames to the three nodes.
Written by Supun Setunga
Heap Dump:
jmap -dump:live,format=b,file=<filename>.hprof <PID>

Thread Dump:
jstack <PID> > <filename>
Written by Supun Setunga
Log in to MySQL with your username and password, e.g.: mysql -u root -proot. Then execute the following command:

SELECT table_schema "DB Name",
       ROUND(SUM(data_length + index_length)/1024/1024, 2) "Size in MBs"
FROM information_schema.tables
GROUP BY table_schema;

Here SUM(data_length + index_length) is in bytes. Hence we have divided it by 1024 twice, to get the size in megabytes.
Written by Supun Setunga
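The same check can be scripted; here is a hedged sketch using the mysql-connector-python package (an assumption of this sketch, not part of the original post; the credentials and host mirror the example above and are placeholders):

```python
# Run the per-schema size query from Python.
# Requires: pip install mysql-connector-python
import mysql.connector

cnx = mysql.connector.connect(user="root", password="root", host="127.0.0.1")
cursor = cnx.cursor()
cursor.execute(
    "SELECT table_schema, "
    "ROUND(SUM(data_length + index_length)/1024/1024, 2) "
    "FROM information_schema.tables GROUP BY table_schema"
)
for schema, size_mb in cursor:
    print(schema, size_mb, "MB")
cnx.close()
```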