Author Posts
Data Science, Tech

Apache Spark SQL User Defined Function (UDF) POC in Java

If you’ve worked with Spark SQL, you might have come across the concept of User Defined Functions (UDFs). As the name suggests, it’s a feature where you define a function, pretty straightforward. But how is this different from any other custom function that you write? Well, when you’re working with Spark in a distributed environment, your code is distributed across the cluster. For this to happen, your code entities have to be serializable, including the various functions you call. When you want to manipulate columns in your Dataset, Spark provides a variety of built-in functions. But there are cases when you want a custom implementation to work with your columns. For this, Spark provides UDFs. But be warned, UDFs should be used as sparingly as possible. This is because ...
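As a quick illustration, here's a minimal sketch of registering and calling a UDF in Java, assuming a local `SparkSession`; the input file, column names, and UDF name are placeholders made up for this example, not taken from the post:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

import static org.apache.spark.sql.functions.callUDF;
import static org.apache.spark.sql.functions.col;

public class UdfPoc {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("udf-poc")
                .master("local[*]")
                .getOrCreate();

        // Register a UDF that upper-cases a string column. The lambda must be
        // serializable, because Spark ships it to the executors.
        spark.udf().register(
                "toUpper",
                (UDF1<String, String>) s -> s == null ? null : s.toUpperCase(),
                DataTypes.StringType);

        // "people.json" and the "name" column are hypothetical inputs.
        Dataset<Row> df = spark.read().json("people.json");
        df.withColumn("nameUpper", callUDF("toUpper", col("name"))).show();
    }
}
```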

Read More
Data Science, Tech

Connect Apache Spark to your MongoDB database using the mongo-spark-connector

A couple of days back, we saw how we can connect Apache Spark to an Apache HBase database and query the data from a table using a catalog. Today, we’ll see how we can connect Apache Spark to a MongoDB database and get data directly into Spark from there. MongoDB provides a plugin called the mongo-spark-connector, which will help us connect MongoDB and Spark without any drama at all. We just need to provide the MongoDB connection URI in the SparkConf object, and create a ReadConfig object specifying the collection name. It might sound complicated right now, but once you look at the code, you’ll understand how extremely easy this is. So, let’s look at an example. The Dataset: Before we look at the code, we need to make sure we have some data in our ...
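For a feel of the two pieces mentioned above, here's a minimal sketch following the connector's documented Java API; the URI, database, and collection names are placeholders:

```java
import com.mongodb.spark.MongoSpark;
import com.mongodb.spark.config.ReadConfig;
import com.mongodb.spark.rdd.api.java.JavaMongoRDD;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.bson.Document;

import java.util.HashMap;
import java.util.Map;

public class MongoSparkPoc {
    public static void main(String[] args) {
        // The MongoDB connection URI goes into the SparkConf.
        // "mydb" and "myCollection" are hypothetical names.
        SparkConf conf = new SparkConf()
                .setAppName("mongo-spark-poc")
                .setMaster("local[*]")
                .set("spark.mongodb.input.uri", "mongodb://127.0.0.1/mydb.myCollection");

        JavaSparkContext jsc = new JavaSparkContext(conf);

        // A ReadConfig can override the collection (and other options) per read.
        Map<String, String> overrides = new HashMap<>();
        overrides.put("collection", "myCollection");
        ReadConfig readConfig = ReadConfig.create(jsc).withOptions(overrides);

        // Load the collection straight into Spark as an RDD of BSON Documents.
        JavaMongoRDD<Document> rdd = MongoSpark.load(jsc, readConfig);
        System.out.println("Documents read: " + rdd.count());

        jsc.close();
    }
}
```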

Read More
Data Science, Tech

Connect Apache Spark to your HBase database (Spark-HBase Connector)

There will be times when you’ll need the data in your HBase database to be brought into Apache Spark for processing. Usually, you’ll query the database, get the data in whatever format you fancy, and then load that into Spark, maybe using the `parallelize()` function. This works just fine. But depending on the size of the data, this could cause delays. At least it did for our application. So after some research, we stumbled upon a Spark-HBase connector in the Hortonworks repository. Now, what is this connector, and why should you be considering it? The Spark-HBase Connector (shc-core): The SHC is a tool provided by Hortonworks to connect your HBase database to Apache Spark so that you can tell your Spark context to pick up the data directly fro...
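To make the "catalog" idea concrete, here's a rough sketch of how a read through the connector typically looks in Java; the table name, column family, and columns in the catalog JSON are invented for this example:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import java.util.HashMap;
import java.util.Map;

public class ShcPoc {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("shc-poc")
                .getOrCreate();

        // The catalog maps an HBase table (rowkey + column families) onto
        // Dataset columns. Table and column names here are placeholders.
        String catalog = "{\n"
                + "  \"table\": {\"namespace\": \"default\", \"name\": \"employee\"},\n"
                + "  \"rowkey\": \"key\",\n"
                + "  \"columns\": {\n"
                + "    \"id\": {\"cf\": \"rowkey\", \"col\": \"key\", \"type\": \"string\"},\n"
                + "    \"name\": {\"cf\": \"personal\", \"col\": \"name\", \"type\": \"string\"}\n"
                + "  }\n"
                + "}";

        // shc-core registers itself as a Spark data source; the catalog is
        // passed under the "catalog" option key.
        Map<String, String> options = new HashMap<>();
        options.put("catalog", catalog);

        Dataset<Row> df = spark.read()
                .options(options)
                .format("org.apache.spark.sql.execution.datasources.hbase")
                .load();

        df.show();
    }
}
```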

Read More
Tech

How you can improve your backend services’ performance using Apache Kafka

In most real-world applications, we have a RESTful API service facing various client applications and a collection of backend services which process the data coming from those clients. Depending on the application, the architecture might have various services spread across multiple clusters of servers, and some form of queue or messaging service gluing them together. Today, we're going to talk about one such messaging service, Apache Kafka, and how it can improve the performance of your services. We're going to assume that we have at least two microservices: one for the APIs that are exposed to the world, and one which processes the requests coming in from the API microservice, but in an async fashion. Because this is async in nature, this might not be suitable for the kind of ap...
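A rough sketch of the API side of that setup, using the plain Kafka Java client (the topic name, class name, and payload format are assumptions for illustration): the API microservice acknowledges the client right away and hands the payload off to Kafka for the backend service to pick up later.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class RequestForwarder {

    private final KafkaProducer<String, String> producer;

    public RequestForwarder(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
    }

    // Publish the request payload and return immediately; the backend
    // microservice consumes "incoming-requests" at its own pace.
    public void forward(String requestPayload) {
        producer.send(new ProducerRecord<>("incoming-requests", requestPayload));
    }
}
```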

Read More
Tech

Why you should switch to Signal or Telegram from WhatsApp, Today

When we think of communicating with someone today, we mostly think of sending them a text message or a voice note on WhatsApp. And some other people, who are least bothered about their privacy online, think of Facebook Messenger. But not all these users know what's happening with the messages they exchange on these platforms. Let's take a look at that. Before we start, let me admit, I am by no means an expert on security and privacy online. But I have done enough research over the last couple of years, which made me switch to Firefox and DuckDuckGo (with a lot of customized preferences on both) from Google's Chrome browser and search. I've made a lot of other such switches in my digital life. So not all that I write here is bullshit. My concern about WhatsApp was first raised whe...

Read More
Tech

Simple Apache Kafka Producer and Consumer using Spring Boot

Originally published here: https://medium.com/@contactsunny/simple-apache-kafka-producer-and-consumer-using-spring-boot-41be672f4e2b Before I even start talking about Apache Kafka here, let me answer the question you'll have after reading the title: aren't there enough posts and guides about this topic already? Yes, there are plenty of reference documents and how-to posts about how to create Kafka producers and consumers in a Spring Boot application. Then why am I writing another post about this? Well, in the future, I'll be talking about some advanced stuff in the data science space, where Apache Kafka is one of the most widely used technologies and tools. It kind of becomes important to know how to work with Apache Kafka in a real-world application. So this is an introductory po...
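In Spring Boot, the producer and consumer boil down to very little code. Here's a minimal sketch using Spring Kafka's `KafkaTemplate` and `@KafkaListener`; the topic and group names are placeholders, and the broker address is assumed to be configured in `application.properties`:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class KafkaMessagingService {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Producer: publish a message to the topic.
    public void send(String message) {
        kafkaTemplate.send("myTopic", message);
    }

    // Consumer: Spring invokes this method for each record on the topic.
    @KafkaListener(topics = "myTopic", groupId = "my-group")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}
```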

Read More
Tech

Keystroke Dynamics, What Is It?

For decades, we have been using the two-pronged key system for securing our electronic data and services. The two-pronged key we're talking about is the username/password combination. There are variations of this, of course. For example, instead of a username, you might be using your email address, or something called a user ID. But the concept remains the same. The username/password combination for security is over 50 years old. To be more precise, it was first implemented in 1961 at the Massachusetts Institute of Technology (MIT). We have been using this security method for all kinds of data and services online, including but not limited to email, banking, and gaming services. But it's also true that it's been proven many times that this kind of security doesn't r...

Read More
Data Science

What is multicollinearity?

Multicollinearity is a term we often come across when we're working with multiple regression models. We have even talked about it in our previous posts, but do we know what it actually means? Today, we'll try to understand that. In most real-life problems, we usually have multiple features to work with. And not all of them are in the format that we, or the model, want. For example, a lot of categorical features are usually in text format. But as we already know, our models require the features to be numerical. For this, we will label encode the feature and, if required, we'll even one-hot encode it. But in some cases, we might have features whose values can be easily determined by the values of other features. In other words, we can see a very go...
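One standard way to quantify "a feature determined by the other features" (the usual diagnostic, though the excerpt doesn't name it) is the variance inflation factor: regress each feature on all the others and check how well it is explained.

```latex
% VIF for feature x_i, where R_i^2 is the R-squared obtained by
% regressing x_i on all the other features:
\mathrm{VIF}_i = \frac{1}{1 - R_i^2}
```

A VIF of 1 means the feature is uncorrelated with the rest; values well above that (commonly 5 or 10 are used as thresholds) are taken as a sign of problematic multicollinearity.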

Read More
Data Science

Overfitting and Underfitting models in Machine Learning

In most of our posts about machine learning, we've talked about overfitting and underfitting. But most of us don't yet know what those two terms mean. What does it actually mean when a model is overfit, or underfit? Why are they considered bad? And how do they affect the accuracy of our model's predictions? These are some of the basic, but important, questions we need to ask and get answers to. So let's discuss these two today. The datasets we use for training and testing our models play a huge role in the efficiency of our models. It's equally important to understand the data we're working with. The quantity and the quality of the data also matter, obviously. When there is too little data in the training phase, the models may fail to understand the patterns in the data, or fa...

Read More
Data Science

Different types of Validations in Machine Learning (Cross Validation)

Now that we know what feature selection is and how to do it, let's move our focus to validating the efficiency of our model. This is known as validation or cross validation, depending on what kind of validation method you're using. But before that, let's try to understand why we need to validate our models. Validation, or Evaluation of Residuals: Once you are done fitting your model to your training data, and you've also tested it with your test data, you can't just assume that it's going to work well on data that it has not seen before. In other words, you can't be sure that the model will have the desired accuracy and variance in your production environment. You need some kind of assurance of the accuracy of the predictions that your model is putting out. For this, we need to val...
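To make the cross-validation idea concrete, here's a minimal sketch of a k-fold index split; the class and method names are made up for this example, and in a real project you'd use your ML library's own splitter:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class KFold {

    // Returns k (trainIndices, testIndices) pairs over n samples.
    // Each sample lands in the test fold exactly once.
    public static List<int[][]> split(int n, int k, long seed) {
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < n; i++) indices.add(i);
        Collections.shuffle(indices, new Random(seed));

        List<int[][]> folds = new ArrayList<>();
        int foldSize = n / k;
        for (int f = 0; f < k; f++) {
            int start = f * foldSize;
            int end = (f == k - 1) ? n : start + foldSize; // last fold takes the remainder
            List<Integer> test = indices.subList(start, end);
            List<Integer> train = new ArrayList<>(indices);
            train.removeAll(test);
            folds.add(new int[][]{toArray(train), toArray(test)});
        }
        return folds;
    }

    private static int[] toArray(List<Integer> list) {
        return list.stream().mapToInt(Integer::intValue).toArray();
    }
}
```

You'd then fit the model k times, each time training on one fold's train indices and scoring on its test indices, and average the k scores to estimate how the model behaves on unseen data.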

Read More