Author Posts
Smartphones, Tech

Bixby Routines, they actually work!

If you, for some reason, have been living under a rock and don't know what Bixby is, it's the virtual assistant that Samsung has been trying to shove down your throat for a while now. But fortunately, with their latest smartphones, the Galaxy Note 10 series, they've given us the option to silence Bixby forever with the "Side Key" setting. Today, we're not going to talk about how horrible or awesome the virtual assistant is; rather, we'll look at how some features of Bixby are actually very useful and work as expected. We're going to talk about Bixby Routines. On my Galaxy Note 9, I was using a third party app to map the Bixby key to open the Google app, and had mapped Bixby itself to a double press of the key. But with my new Galaxy Note 10 Plus, I have the option to completely remove Bixby integration w...

Read More
Tech

How to automatically trigger AWS Lambda functions using CloudWatch

If you have AWS Lambda functions which need to be triggered periodically, like CRON jobs, there are many ways to achieve this. But I recently discovered a very easy, AWS-native way of doing this, which makes life a lot easier. So, there are a lot of ways you can trigger Lambda functions periodically. One of the most common ways I've seen people do this is adding an API Gateway to the Lambda function, and then calling that API periodically as a CRON job from one of the machines in the setup. I actually thought this is how you're supposed to do that. Okay, let me make this clear. I'm not a DevOps guy. I just learn these things as and when the job requires me to. So the CloudWatch feature that I recently discovered is by no means a newly introduced feature. It just so happens that I ...
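The excerpt cuts off before the walkthrough, but the gist of the approach is a CloudWatch Events (now EventBridge) rule with a schedule expression, for example rate(5 minutes) or cron(0 12 * * ? *), with the Lambda function set as its target. As a minimal sketch, assuming the aws-lambda-java-core library and an illustrative class name, the function on the receiving end needs nothing special:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A plain Lambda handler; a CloudWatch Events rule with a schedule
// expression pointed at this function will invoke it periodically.
public class ScheduledTask implements RequestHandler<Object, String> {

    @Override
    public String handleRequest(Object event, Context context) {
        // The scheduled event payload arrives here; for a simple
        // cron-style job we often don't need to inspect it at all.
        context.getLogger().log("Periodic task triggered by CloudWatch");
        return "done";
    }
}
```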

Read More
Data Science, Tech

Apache Kafka Streams and Tables, the stream-table duality

In the previous post, we tried to understand the basics of Apache's Kafka Streams. In this post, we'll build on that knowledge and see how Kafka Streams can be used both as streams and tables. Stream processing has become very common in most modern applications today. You'll have a minimum of one stream coming into your system to be processed. And depending on your application, it'll mostly be stateless. But that's not the case with all applications. We'll have some sort of data enrichment going on between streams. Suppose you have one stream of user activity coming in. You'll ideally have a user ID attached to each fact in that stream. But down the pipeline, the user ID alone is not going to be enough for processing. Maybe you need more information about the user to be present in t...
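For a concrete picture of that enrichment, here's a minimal sketch using the Kafka Streams DSL: the activity topic is read as a stream, the profiles topic is read as a table, and the two are joined by user ID. The topic names and string values here are hypothetical, not from the post itself.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class EnrichmentExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "activity-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // The activity stream: one fact per user action, keyed by user ID.
        KStream<String, String> activity = builder.stream("user-activity");

        // The profiles topic read as a changelog table: the latest profile per user ID.
        KTable<String, String> users = builder.table("user-profiles");

        // Join each activity fact with the current profile for the same key.
        activity.join(users, (action, profile) -> action + " | " + profile)
                .to("enriched-activity");

        new KafkaStreams(builder.build(), props).start();
    }
}
```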

Read More
Data Science, Tech

Getting started with Apache Kafka Streams

In the age of big data and data science, stream processing is very significant. So it's not at all surprising that every major organisation has at least one stream processing service. Apache has a few too, but today we're going to look at Apache's Kafka Streams. Kafka is a very popular pub-sub service. And if you've worked with Kafka before, Kafka Streams is going to be very easy to understand. And if you haven't got any idea of Kafka, you don't have to worry, because most of the underlying technology has been abstracted away in Kafka Streams, so that you don't have to deal with consumers, producers, partitions, offsets, and such. In this post, we'll look at a few concepts of Kafka Streams, and maybe understand how it differs from other stream processing engines. First of all, Kafka...
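As a taste of how little ceremony that abstraction leaves you with, here's a minimal sketch of a complete Kafka Streams application, assuming hypothetical input and output topic names: it reads a topic, uppercases each value, and writes the result out, with no consumers, producers, partitions, or offsets in sight.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class HelloStreams {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "hello-streams");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read from one topic, transform each value, write to another.
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase())
             .to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}
```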

Read More
Data Science, Tech

Put data to Amazon Kinesis Firehose delivery stream using Spring Boot

If you work with streams of big data which have to be collected, transformed, and analysed, you've surely heard of Amazon Kinesis Firehose. It is an AWS service used to load streams of data to data lakes or analytical tools, along with compressing, transforming, or encrypting the data. You can use Firehose to load streaming data to something like S3 or Redshift. From there, you can use a SQL query engine such as Amazon Athena to query this data. You can even connect this data to your BI tool and get real-time analytics of the data. This could be very useful in applications where real-time analysis of data is necessary. In this post, we'll see how we can create a delivery stream in Kinesis Firehose, and write a simple piece of Java code to put records (produce data) to t...
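As a rough sketch of the producing side, assuming the AWS SDK for Java (v1) and a hypothetical delivery stream name, putting a single record looks something like this:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClientBuilder;
import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest;
import com.amazonaws.services.kinesisfirehose.model.Record;

public class FirehoseProducer {
    public static void main(String[] args) {
        // The client picks up credentials and region from the default provider chain.
        AmazonKinesisFirehose firehose = AmazonKinesisFirehoseClientBuilder.defaultClient();

        // Firehose records are raw bytes; newline-delimited JSON is a common choice.
        String data = "{\"event\":\"test\"}\n";
        Record record = new Record()
                .withData(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8)));

        PutRecordRequest request = new PutRecordRequest()
                .withDeliveryStreamName("my-delivery-stream")   // hypothetical name
                .withRecord(record);

        firehose.putRecord(request);
    }
}
```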

Read More
Data Science, Tech

How to Query Athena from a Spring Boot application?

In the last post, we saw how to query data from S3 using Amazon Athena in the AWS Console. But querying from the Console itself is very limited. We can't really do much with the data, and we can't sit in front of the console the whole day running queries manually every time we want to analyse this data. We need to automate the process. And what better way to do that than writing a piece of code? So in this post, we'll see how we can use the AWS Java SDK in a Spring Boot application and query the same sample data set from the previous post. We'll then log it to the console to make sure we're getting the right data. The Dependencies Before we get to the code, let's first get our dependencies right. I did the painstaking task of finding the right dependencies for this POC. All...
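The excerpt is truncated here, but as a sketch of the general flow with the AWS SDK for Java (v1): you start a query execution, poll until it leaves the QUEUED/RUNNING states, and then fetch the result set. The table, database, and output bucket names below are hypothetical.

```java
import com.amazonaws.services.athena.AmazonAthena;
import com.amazonaws.services.athena.AmazonAthenaClientBuilder;
import com.amazonaws.services.athena.model.*;

public class AthenaQueryRunner {
    public static void main(String[] args) throws InterruptedException {
        AmazonAthena athena = AmazonAthenaClientBuilder.defaultClient();

        // Start the query; Athena writes the results to the given S3 location.
        StartQueryExecutionRequest start = new StartQueryExecutionRequest()
                .withQueryString("SELECT * FROM sample_table LIMIT 10")   // hypothetical table
                .withQueryExecutionContext(new QueryExecutionContext().withDatabase("sampledb"))
                .withResultConfiguration(new ResultConfiguration()
                        .withOutputLocation("s3://my-athena-results/"));  // hypothetical bucket
        String executionId = athena.startQueryExecution(start).getQueryExecutionId();

        // Athena queries are asynchronous, so poll until the query finishes.
        while (true) {
            String state = athena.getQueryExecution(
                            new GetQueryExecutionRequest().withQueryExecutionId(executionId))
                    .getQueryExecution().getStatus().getState();
            if (!state.equals("QUEUED") && !state.equals("RUNNING")) break;
            Thread.sleep(1000);
        }

        // Fetch the results and log each row to the console.
        GetQueryResultsResult results = athena.getQueryResults(
                new GetQueryResultsRequest().withQueryExecutionId(executionId));
        for (Row row : results.getResultSet().getRows()) {
            System.out.println(row);
        }
    }
}
```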

Read More
Data Science, Tech

Query data from S3 files using Amazon Athena

Amazon Athena is defined as "an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL." So, it's another SQL query engine for large data sets stored in S3. This is very similar to other SQL query engines, such as Apache Drill. But unlike Apache Drill, Athena is limited to data from Amazon's own S3 storage service. However, Athena is able to query a variety of file formats, including, but not limited to, CSV, Parquet, and JSON. In this post, we'll see how we can set up a table in Athena using a sample data set stored in S3 as a .csv file. But for this, we first need that sample CSV file. You can download it here: sampleData. Once you have the file downloaded, create a new bucket in ...
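As a sketch of what the table definition might look like once the file is in S3 (the column names and bucket path here are hypothetical, since they depend on your sample file):

```sql
-- A table over CSV files in S3; adjust the columns and the
-- bucket path to match your own sample data.
CREATE EXTERNAL TABLE sample_data (
  id     INT,
  name   STRING,
  city   STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://my-sample-bucket/data/'
TBLPROPERTIES ('skip.header.line.count' = '1');

-- Once the table exists, standard SQL works directly against S3:
SELECT * FROM sample_data LIMIT 10;
```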

Read More
Data Science, Tech

Use Apache Drill with Spring Boot or Java to query data using SQL queries

In the last few posts, we saw how to connect Apache Drill with MongoDB and also how we can connect it to Kafka to query data using simple SQL queries. But when you want to move this to an actual real world project, you can't sit around querying data from a terminal all day long. You want to write a piece of code which does the dirty work for you. But how exactly do you use Apache Drill within your code? Today, we'll see how we can achieve this with Spring Boot, or pretty much any other Java program. The Dependencies For this POC, I'm going to write a simple Spring Boot CommandLineRunner program. But you can use pretty much any other Java framework or vanilla Java code for this. If you have a dependency management tool such as Maven or Gradle, you can just add the dependency ...
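The excerpt stops before the code, but the usual route into Drill from Java is its JDBC driver. As a minimal sketch, assuming the drill-jdbc-all driver is on the classpath and a Drillbit is running locally, plain JDBC is enough; the query below runs against the employee.json sample that ships with Drill's classpath (cp) storage plugin:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DrillJdbcExample {
    public static void main(String[] args) throws Exception {
        // Load the Drill JDBC driver and connect to a local Drillbit.
        Class.forName("org.apache.drill.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=localhost");
             Statement stmt = conn.createStatement();
             // Query Drill's bundled sample data set with plain SQL.
             ResultSet rs = stmt.executeQuery("SELECT * FROM cp.`employee.json` LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```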

Read More
Data Science, Tech

Apache Drill vs. Apache Spark – Which SQL query engine is better for you?

If you are in the big data, data science, or BI space, you might have heard about Apache Spark. A few of you might have also heard about Apache Drill, and a small fraction of you might have actually worked with it. I discovered Apache Drill very recently. But since then, I've come to like what it has to offer. But the first thing that I wondered when I glanced over the capabilities of Apache Drill was, how is this different from Apache Spark? Can I use the two interchangeably? I did some research and found the answers. Here, I'm going to answer these questions for myself and maybe for you guys too. It is very important to understand that there is a fundamental difference between the two in how they are implemented and what they are capable of. With Apache Drill, we write SQL quer...

Read More
Data Science, Tech

Analyse Kafka messages with SQL queries using Apache Drill

In the previous post, we figured out how to connect MongoDB with Apache Drill and query data with SQL queries. In this post, let's extend that knowledge and see how we can use similar SQL queries to analyse our Kafka messages. Configuring the Kafka storage plugin in Apache Drill is quite simple, very similar to how we configured the MongoDB storage plugin. First, we run our local instances of Apache Drill, Apache Zookeeper, and Apache Kafka. After this, head over to http://localhost:8047/storage, where we can enable the Kafka plugin. You should see it in the list to the right of the page. Click the Enable button. The storage plugin will be enabled. After this, we need to add a few configuration parameters to start querying data from Kafka. Click the Update button next to Kafka, whi...
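For reference, a Kafka storage plugin configuration along these lines (adapted from the Drill documentation; the group ID is illustrative) is what goes into that Update dialog:

```json
{
  "type": "kafka",
  "kafkaConsumerProps": {
    "key.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer",
    "value.deserializer": "org.apache.kafka.common.serialization.ByteArrayDeserializer",
    "bootstrap.servers": "localhost:9092",
    "group.id": "drill-consumer",
    "auto.offset.reset": "earliest"
  },
  "enabled": true
}
```

Once saved, Kafka topics become queryable like tables, for example SELECT * FROM kafka.`my-topic`;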

Read More