Using the tc Server build pack for Pivotal Cloud Foundry 1.3

On Pivotal Network (link below) you will find various build packs that you can download, add to PCF, and use for your applications beyond the build packs that ship with the platform.

https://network.pivotal.io/products/pivotal-cf

I am going to show how you would take one of these build packs, install it, and then consume it from an application. In this demo I am going to use "tc server buildpack (offline) v2.4".

1. (more...)

GemFireXD*Web announced on Pivotal Blog

Read about it here.

Connecting to Pivotal Cloud Foundry Ops Metrics using Java VisualVM

The Pivotal Ops Metrics tool is a JMX extension for Elastic Runtime. Pivotal Ops Metrics collects and exposes system data from Cloud Foundry components via a JMX endpoint. Use this system data to monitor your installation and assist in troubleshooting. Below is the tile once it is installed and available in Pivotal Cloud Foundry Ops Manager.


Once installed and configured, metrics for Cloud Foundry components automatically report to the JMX endpoint. Your JMX client uses the credentials (more...)
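
For anyone who wants to verify the endpoint without a GUI, below is a minimal sketch of a plain Java JMX client connecting to an Ops Metrics style endpoint. The host name, port (44444) and credentials are placeholders rather than values from this install, so replace them with whatever your Ops Metrics tile is configured with; a VisualVM JMX connection would point at the same host, port and credentials.

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class OpsMetricsJmxClient
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder host/port - use the JMX endpoint exposed by your Ops Metrics tile.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://opsmetrics.example.com:44444/jmxrmi");

        // Placeholder credentials configured when the tile was installed.
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "password" });

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try
        {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + connection.getMBeanCount());
        }
        finally
        {
            connector.close();
        }
    }
}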

SQLShell accessing Pivotal GemFire XD 1.3

I recently stumbled upon SQLShell via the URL below. Below I will show how you can connect to Pivotal GemFireXD using SQLShell. I used it to export query results as CSV output.

http://software.clapper.org/sqlshell/users-guide.html

Note: This assumes SQLShell is already installed; the instructions below are for Mac OS X.

1. Create a file at $HOME/.sqlshell/config as shown below; I just took the sample it ships with. Notice how I have added an alias (more...)

Spring XD Pivotal Gemfire Sink Demo

Spring XD is a unified, distributed, and extensible system for data ingestion, real-time analytics, batch processing, and data export. The project's goal is to simplify the development of big data applications.

There are two implementations of the gemfire sink: gemfire-server and gemfire-json-server. They are identical except that the latter converts JSON string payloads to a JSON document format proprietary to GemFire, providing JSON field access and query capabilities. If you are not using JSON, (more...)

Pivotal GemFire 8 – Starting a Locator / Cache Server in IntelliJ 13.x

In this post I am going to show how we can use the following classes to launch a Pivotal GemFire locator / member from code directly within IntelliJ IDEA; a small launcher sketch follows the Maven snippet below.

com.gemstone.gemfire.distributed.LocatorLauncher API
com.gemstone.gemfire.distributed.ServerLauncher API

1. Add the GemFire 8 Maven repository to your project to ensure we pull in the required JAR files.
  
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven. (more...)
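
To show where those two launcher classes come in, here is a minimal sketch of a single class that starts either a locator or a cache server depending on a program argument, so it can be run as two separate IDEA run configurations. The member names, ports and locator address are illustrative assumptions only.

import com.gemstone.gemfire.distributed.LocatorLauncher;
import com.gemstone.gemfire.distributed.ServerLauncher;

public class StartGemFireMember
{
    public static void main(String[] args)
    {
        // Run with the argument "locator" to start a locator; anything else starts a cache server.
        if (args.length > 0 && args[0].equals("locator"))
        {
            // Assumed member name and the default locator port of 10334.
            LocatorLauncher locator = new LocatorLauncher.Builder()
                    .setMemberName("locator1")
                    .setPort(10334)
                    .build();
            locator.start();
            locator.waitOnLocator();
        }
        else
        {
            // Assumed member name, cache server port and locator address.
            ServerLauncher server = new ServerLauncher.Builder()
                    .setMemberName("server1")
                    .set("locators", "localhost[10334]")
                    .setServerPort(40404)
                    .build();
            server.start();
            server.waitOnServer();
        }
    }
}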

Creating a Pivotal GemFireXD Data Source Connection from IntelliJ IDEA 13.x

To create a Pivotal GemFireXD data source connection from IntelliJ 13.x, follow the steps below. You will need to define a GemFireXD driver prior to creating the data source itself.

1. Bring up the Databases panel.

2. Define a GemFireXD Driver as follows


3. Once defined, select it using the following options. You are using the driver you created at step 2 above.

+ -> Data Source -> com.pivotal.gemfirexd.jdbc. (more...)
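
As a quick sanity check outside the IDE, the same driver can be exercised from plain JDBC. The driver class name, URL format, port (1527) and the SYS.MEMBERS query below are my assumptions about a default GemFireXD network server, so adjust them to match the host and port you point the IDEA data source at.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GemFireXDConnectionTest
{
    public static void main(String[] args) throws Exception
    {
        // Assumed thin client driver class and JDBC URL for a local network server.
        Class.forName("com.pivotal.gemfirexd.jdbc.ClientDriver");
        Connection conn = DriverManager.getConnection("jdbc:gemfirexd://localhost:1527/");

        Statement stmt = conn.createStatement();
        // List the members of the distributed system as a simple connectivity test.
        ResultSet rs = stmt.executeQuery("select id, kind from sys.members");
        while (rs.next())
        {
            System.out.println(rs.getString(1) + " : " + rs.getString(2));
        }

        rs.close();
        stmt.close();
        conn.close();
    }
}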

Variable in list with Postgres JDBC and Greenplum

I previously blogged on how to create a variable JDBC IN list with Oracle. Here is how you would do it with Pivotal Greenplum. It is much easier, as there is no need to write a function. In the Greenplum demo below we use the ANY function combined with string_to_array.

http://theblasfrompas.blogspot.com.au/2008/02/variable-in-list-with-oracle-jdbc-and.html

Code as follows
  
import java.sql.*;
import java.sql.DriverManager;

/**
* Created by papicella on 4/09/2014.
*/
public class VariableInListGreenplum
{

public (more...)
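
For reference, here is a self-contained sketch of the approach, assuming the classic DEPT/EMP sample schema and placeholder connection details; the interesting part is the any(string_to_array(?, ',')::int[]) predicate, which lets a single bind variable carry the whole IN list.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class VariableInListSketch
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder connection details - adjust host, database and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/gpadmin", "gpadmin", "password");

        // The comma separated string is expanded server side by string_to_array,
        // and ANY matches empno against the resulting integer array.
        String sql = "select empno, ename from emp " +
                     "where empno = any(string_to_array(?, ',')::int[])";

        PreparedStatement stmt = conn.prepareStatement(sql);
        stmt.setString(1, "7369,7499,7521");

        ResultSet rs = stmt.executeQuery();
        while (rs.next())
        {
            System.out.println(rs.getInt(1) + " : " + rs.getString(2));
        }

        rs.close();
        stmt.close();
        conn.close();
    }
}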

REST with Pivotal GemFire 8.0

Pivotal GemFire 8.0 now includes REST support. You can read more about it as follows

http://gemfire.docs.pivotal.io/latest/userguide/gemfire_rest/book_intro.html#concept_7628F498DB534A2D8A99748F5DA5DC94

Here is how we set it up, along with some quick examples showing how it works with region data. For the cache servers I have PDX configured as shown below.
  
<!DOCTYPE cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN"
"http://www.gemstone.com/dtd/cache8_0.dtd">
<cache>
<pdx read-serialized="true">
<pdx-serializer>
(more...)
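
To give a feel for what a client call looks like once the REST endpoint is up, here is a small sketch that issues a GET against what I understand to be the developer REST base path (/gemfire-api/v1) for a region named departments. The host, port and region name are assumptions, so substitute the values from your own cache server configuration.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GemFireRestClient
{
    public static void main(String[] args) throws Exception
    {
        // Assumed endpoint: http://<http-service-bind-address>:<http-service-port>/gemfire-api/v1/<region>
        URL url = new URL("http://localhost:8080/gemfire-api/v1/departments");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        System.out.println("HTTP status: " + conn.getResponseCode());

        // Dump the JSON payload returned for the region entries.
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null)
        {
            System.out.println(line);
        }
        reader.close();
        conn.disconnect();
    }
}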

Dept/Emp POJO’s with sample data for Pivotal GemFire

I constantly blog about using DEPARTMENT/EMPLOYEE POJOs with sample data. Here is how to create a file with data to load into GemFire to give you that sample set.

Note: You would need to create POJOs for Department/Employee objects that have getters/setters for the attributes mentioned below (a minimal sketch of the Department class appears after the data).

Dept Data

put --key=10 --value=('deptno':10,'name':'ACCOUNTING') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=20 --value=('deptno':20,'name':'RESEARCH') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=30 --value=('deptno':30,'name':'SALES') --value-class=pivotal.au.se. (more...)
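
For completeness, here is a minimal sketch of what the Department value class might look like, based on the --value-class and attributes used in the put commands above; the Employee class would follow the same pattern with its own attributes.

package pivotal.au.se.deptemp.beans;

import java.io.Serializable;

public class Department implements Serializable
{
    private int deptno;
    private String name;

    // A no-argument constructor so the object can be created from the gfsh put data.
    public Department()
    {
    }

    public Department(int deptno, String name)
    {
        this.deptno = deptno;
        this.name = name;
    }

    public int getDeptno()
    {
        return deptno;
    }

    public void setDeptno(int deptno)
    {
        this.deptno = deptno;
    }

    public String getName()
    {
        return name;
    }

    public void setName(String name)
    {
        this.name = name;
    }

    @Override
    public String toString()
    {
        return "Department [deptno=" + deptno + ", name=" + name + "]";
    }
}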

Using HAWQ with PHD service in PCF 1.2

The following demo shows how to use the PCF 1.2 PHD service with HAWQ by loading data into the PCF PaaS platform.

1. First, let's set up our environment to use the correct version of Hadoop on our local laptop.

export HADOOP_INSTALL=/Users/papicella/vmware/software/hadoop/hadoop-2.0.5-alpha
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home

export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

export HADOOP_OPTS="$HADOOP_OPTS -Djava.awt.headless=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

export YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava. (more...)

Pivotal Cloud Foundry Installed lets create an ORG / USER to get started

I installed Pivotal Cloud Foundry 1.2 recently, and the commands below are what I ran using the CLI to quickly create an org and a user to get started with. The steps below assume you are connected as the admin user when setting up a new org.

Cloud Foundry CLI Commands as follows

cf api {cloud end point}
cf create-org pivotal
cf create-user pas pas
cf set-org-role pas pivotal OrgManager
cf target -o pivotal
cf create-space development
(more...)

Pivotal GemFireXD*Web, Web based Interface For GemFireXD

Pivotal GemFire XD bridges GemFire’s proven in-memory intelligence and integrates it with Pivotal HD 2.0 and HAWQ. This enables businesses to make prescriptive decisions in real-time, such as stock trading, fraud detection, intelligence for energy companies, or routing for the telecom industries.

You can read more about GemFireXD and its integration with PHD here.

https://www.gopivotal.com/big-data/pivotal-hd

While the development team worked on GemFireXD, I produced another open source, web-based tool named GemFireXD*Web. (more...)

Creating some Pivotal Cloud Foundry (PCF) PHD services

After installing the PHD add-on for Pivotal Cloud Foundry 1.1, I quickly created some development services for PHD using the CLI, as shown below.

[Tue Apr 15 22:40:08 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hawq-cf free dev-hawq
Creating service dev-hawq in org pivotal / space development as pas...
OK
[Tue Apr 15 22:42:31 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hbase-cf free dev-hbase
Creating service dev-hbase in org pivotal / space development as pas...
OK
[Tue Apr (more...)

Pivotal Greenplum GPLOAD with multiple CSV files

I recently needed to set up a cron script that loaded CSV files from a directory into Greenplum every 2 minutes. Once loaded, the files are moved onto Hadoop for archive purposes. The config below shows how to use the GPLOAD data load utility, which utilises GPFDIST.

1. Create a load table. In this example the data is then moved to the FACT table once the load is complete
  
drop table rtiadmin.rtitrans_etl4;

CREATE TABLE rtiadmin.rtitrans_etl4 (more...)

Pivotal Cloud Foundry using App Direct "newrelic" Monitoring Service

The PCF AWS marketplace provides App Direct services, and in this example I am going to use the "newrelic" monitoring service to monitor my Spring-based Java application. It's really this simple.

1. Create a service as shown below.

[Tue Mar 04 17:19:34 papicella@:~/cfapps/spring-travel ] $ cf create-service newrelic standard dev-newrelic

2. Create a manifest.yml for my Spring application, which uses the New Relic service above.

applications:
- name: pas-springtravel 
  memory: 1024M 
(more...)

Deploying Spring MVC application to Cloud Foundry from IntelliJ IDEA

I previously showed how to create a connection in IntelliJ IDEA to Cloud Foundry v2 in the post below.

http://theblasfrompas.blogspot.com.au/2014/02/intellij-idea-version-13-now-includes.html

With a Cloud Foundry cloud connection defined, we can now push our application directly from the IDE as shown below.

1. Create a run configuration for your project as shown below. We also specify the memory and number of instances on this page as part of the push / deployment process.



2. (more...)

IntelliJ IDEA version 13 now includes CloudFoundry Connection

I just installed IntelliJ IDEA version 13 and found that it now includes a Cloud Foundry connection type. You define it under the IDE settings as shown below.


I will test deploying to the publicly hosted AWS Cloud Foundry using this connection at some stage.

PCF (Pivotal Cloud Foundry) cf push multiple applications using manifest file

By creating a manifest file as follows, we can push multiple applications in one go, as shown below.

1. manifest.yml

applications:
- name: pas-props
  memory: 256M
  instances: 1
  host: pas-props
  domain: cfapps.io
  path: ./props.war
- name: pas-httpsession
  memory: 256M
  instances: 1
  host: pas-httpsession
  domain: cfapps.io
  path: ./haclusterdemo.war

2. Push as follows

[Thu Feb 20 14:48:45 papicella@:~/vmware/pivotal/products/cloud-foundry/apps/other ] $ cf push -f (more...)

PCF (Pivotal Cloud Foundry) Using the new CLI to bind services from a cf push

I previously blogged about how to push an application into PCF and then bind services, as shown below.


If you want to do this in one step, simply create a manifest file as shown below. This example isn't the Spring books application, but it shows what your manifest.yml would look like with services included, so you can create one similar to it.

1. Create a manifest.yml file (more...)