How to Use IAM authentication for RDS PostgreSQL with Glue ETL Jobs

Uncategorized
| Nov 21, 2019

Amazon RDS enables you to use AWS Identity and Access Management (IAM) to manage database access for Amazon RDS for PostgreSQL DB instances. It’s possible to use IAM authentication with Glue connections, but it is not well documented, so I will demonstrate how you can do it. In your Glue job, you can import the boto3 library and call its “generate_db_auth_token” method to generate a token and use it when connecting.

Here’s a simple Glue (more...)
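The excerpt is cut off above; as a minimal sketch of the token-generation step it describes (the endpoint, port, and user name below are placeholder values), the boto3 call could look like this:

import boto3

# generate a short-lived IAM authentication token for RDS;
# the endpoint, port, and user name are placeholder values
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="glue_user",
)
# pass the token as the password when opening the PostgreSQL connection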

How to Use AWS S3 bucket for Spark History Server

Uncategorized
| Nov 18, 2019

Since EMR version 5.25, it’s possible to debug and monitor your Apache Spark jobs by logging directly into the off-cluster, persistent Apache Spark History Server using the EMR console. You do not need to do anything extra to enable it, and you can access the Spark history even after the cluster is terminated. The logs are available for active clusters and are retained for 30 days after the cluster is terminated.

Although this is a (more...)
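The excerpt is truncated here; as a hedged sketch of how you might instead point Spark event logs at your own S3 bucket (the bucket name is a placeholder), an EMR configuration classification could look like this:

[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.eventLog.dir": "s3://my-spark-logs-bucket/events/",
      "spark.history.fs.logDirectory": "s3://my-spark-logs-bucket/events/"
    }
  }
]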

Lambda Function to Resize EBS Volumes of EMR Nodes

Uncategorized
| Oct 30, 2019

I have to start by saying that you should not use EMR as a persistent Hadoop cluster. The power of EMR lies in its elasticity. You should launch an EMR cluster, process the data, write the results to S3 buckets, and terminate the cluster. However, we see a lot of AWS customers using EMR as a persistent cluster. So I was not surprised when a customer told me that they needed to resize EBS volumes automatically (more...)
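The excerpt cuts off here; as a minimal sketch of the core resize call such a Lambda function would make (the volume ID and target size are placeholder values), boto3’s EC2 client can do it like this:

import boto3

ec2 = boto3.client("ec2")
# grow a single EBS volume; in a real function the volume IDs would be
# discovered from the EMR nodes first (the ID and size here are placeholders)
ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)  # size in GiB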

Query a HBASE table through Hive using PySpark on EMR

Uncategorized
| Oct 15, 2019

In this blog post, I’ll demonstrate how we can access an HBase table through Hive from a PySpark script/job on an AWS EMR cluster. First, I created an EMR cluster (EMR 5.27.0, Hive 2.3.5, HBase 1.4.0). Then I connected to the master node, executed “hbase shell”, created an HBase table, and inserted a sample row:

create 'mytable', 'f1'                        # create table 'mytable' with column family 'f1'
put 'mytable', 'row1', 'f1:name', 'Gokhan'    # insert one cell (f1:name) into row1

I logged in to Hive and created (more...)
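The excerpt stops there; as a hedged sketch of the final step, once a Hive table is mapped onto the HBase table (the table name mytable_hive below is hypothetical), a PySpark job with Hive support enabled could query it like this:

from pyspark.sql import SparkSession

# enableHiveSupport() lets Spark SQL see the tables in the Hive metastore
spark = SparkSession.builder.appName("hbase-via-hive").enableHiveSupport().getOrCreate()
# mytable_hive is a hypothetical Hive table mapped to the HBase table
spark.sql("SELECT * FROM mytable_hive").show()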

Amazon QLDB and the Missing Command Line Client

Uncategorized
| Sep 10, 2019

Amazon Quantum Ledger Database (QLDB) is a fully managed ledger database that tracks all changes of user data and maintains a verifiable history of changes over time. It was announced at AWS re:Invent 2018 and is now available in five AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo).

You may ask why you would want to use QLDB (a ledger database) instead of using your traditional (more...)

Sample AWS Lambda Function to Monitor Oracle Database

Uncategorized
| Sep 4, 2019

I wrote a very simple AWS Lambda function to demonstrate how to connect to an Oracle database, gather tablespace usage information, and send these metrics to CloudWatch. First, I wrote this Lambda function in Python, and then I had to re-write it in Java. As you may know, you need the cx_Oracle module to connect to Oracle databases from Python. This extension module requires some libraries which are shipped with the Oracle Database Client (oh God! (more...)
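The excerpt is cut off; as a minimal sketch of the metric-publishing half (the namespace, metric name, dimension, and value below are assumptions, and the tablespace figure would really come from a query against the database), the boto3 call could look like this:

import boto3

cloudwatch = boto3.client("cloudwatch")
# publish one custom metric; the namespace, metric name, dimension,
# and value are placeholder assumptions
cloudwatch.put_metric_data(
    Namespace="OracleDB",
    MetricData=[{
        "MetricName": "TablespaceUsedPercent",
        "Dimensions": [{"Name": "TablespaceName", "Value": "USERS"}],
        "Value": 42.0,
        "Unit": "Percent",
    }],
)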

An Interesting Problem with ODI: Unable to retrieve user GUID

Uncategorized
| Apr 26, 2018

One of my customers had a problem logging in to Oracle Data Integrator (ODI) Studio. Their ODI implementation was configured to use external authentication (Microsoft Active Directory). The configuration was done years ago; no one has modified it since, and in fact most people do not even remember how it was configured. Everything was fine until they started to get the “ODI-10192: Unable to retrieve user GUID” error.

They said they got the first error about (more...)

PySpark Examples #5: Discretized Streams (DStreams)

Uncategorized
| Apr 18, 2018

This is the fourth blog post in which I share sample scripts from my presentation about “Apache Spark with Python“. Spark supports two different ways of streaming: Discretized Streams (DStreams) and Structured Streaming. DStreams is the basic abstraction in Spark Streaming; it is a continuous sequence of RDDs representing a stream of data. Structured Streaming is the newer way of streaming, and it’s built on the Spark SQL engine. In the next blog post, I’ll also (more...)
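The excerpt ends there; as a minimal sketch of a DStream in action (the socket source on localhost:9999 is an assumption, e.g. fed by “nc -lk 9999”), a word count over 5-second micro-batches could look like this:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="dstream-wordcount")
ssc = StreamingContext(sc, 5)  # 5-second micro-batches
# assumed source: a plain-text socket on localhost:9999
lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print the word counts of each micro-batch
ssc.start()
ssc.awaitTermination()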

PySpark Examples #3-4: Spark SQL Module

Uncategorized
| Apr 17, 2018

In this blog post, I’ll share examples #3 and #4 from my presentation to demonstrate the capabilities of the Spark SQL module. As I already explained in my previous blog posts, the Spark SQL module provides DataFrames (and Datasets — but Python doesn’t support Datasets because it’s a dynamically typed language) to work with structured data.

First, let’s start by creating a temporary table from a CSV file and running a query on it. Like I did in my (more...)
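The excerpt is truncated; as a hedged sketch of that first step (the file name users.csv and the query are assumptions), creating a temporary view from a CSV file and querying it could look like this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-example").getOrCreate()
# read a CSV file with a header row, inferring column types;
# the file name is a placeholder
df = spark.read.csv("users.csv", header=True, inferSchema=True)
df.createOrReplaceTempView("users")
spark.sql("SELECT name, COUNT(*) AS cnt FROM users GROUP BY name").show()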

PySpark Examples #2: Grouping Data from CSV File (Using DataFrames)

Uncategorized
| Apr 16, 2018

I continue to share example code related to my “Spark with Python” presentation. In my last blog post, I showed how to use RDDs (the core data structure of Spark). This time, I will use DataFrames instead of RDDs. DataFrames are distributed collections of data organized into named columns (in a structured way). They are similar to tables in relational databases. They also provide a domain-specific language API to manipulate your distributed (more...)
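The excerpt cuts off; as a minimal sketch of grouping CSV data with the DataFrame API (the file name and column names are assumptions), it could look like this:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("grouping-example").getOrCreate()
# placeholder file and columns; group the rows by city and count them
df = spark.read.csv("people.csv", header=True, inferSchema=True)
df.groupBy("city").agg(F.count("*").alias("total")).show()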