
Apache Kafka

1.) Overview


Apache Kafka is a distributed streaming platform used for building real-time data pipelines and streaming applications. In this blog, we will discuss how to install Kafka and work through some basic use cases.

This article was created using Apache Kafka 2.1.0 (the Scala 2.12 build, kafka_2.12-2.1.0).

2.) Installation

Download and unpack Kafka from https://kafka.apache.org/downloads. 

2.1) Configuration

config/zookeeper.properties
  • Set dataDir=/tmp/kafka/zookeeper
config/server.properties
  • log.dirs=/tmp/kafka/logs
  • zookeeper.connect=localhost:2181
  • listeners=PLAINTEXT://localhost:9092
To test Kafka run the following commands.
>bin/zookeeper-server-start.sh config/zookeeper.properties
>bin/kafka-server-start.sh config/server.properties

The second command starts the Kafka broker in its own terminal, and you should see connection logs appear on the ZooKeeper side.

3.) Kafka Topics

Create a topic:
>bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic clicks

View the topics:
>bin/kafka-topics.sh --list --zookeeper localhost:2181

Delete the topic (execute at the end):
>bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic clicks

4.) Sending and Receiving Messages

Send messages:
>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic clicks
-Enter some messages here and leave the command open

Receive the messages:
>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic clicks --from-beginning
-You should see all the messages entered in the producer, read from the beginning of the topic
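If you prefer to do the same from Java, here is a minimal sketch using the kafka-clients library (assuming it is on the classpath). It sends one record to the clicks topic and then reads the topic from the beginning:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ClicksExample {

    public static void main(String[] args) {
        produce();
        consume();
    }

    static void produce() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // send a single message to the clicks topic
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("clicks", "hello from java"));
        }
    }

    static void consume() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "clicks-demo");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // read the topic from the beginning and print what we got
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("clicks"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}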

5.) Multi Broker

Make 2 copies of config/server.properties. Set the following properties:

config/server-1.properties
  • broker.id=1
  • listeners=PLAINTEXT://:9093
  • log.dir=/tmp/kafka-logs-1
config/server-2.properties
  • broker.id=2
  • listeners=PLAINTEXT://:9094
  • log.dir=/tmp/kafka-logs-2
Start the two new brokers in separate terminals:
>bin/kafka-server-start.sh config/server-1.properties
>bin/kafka-server-start.sh config/server-2.properties

Create a new topic that will be replicated on the original broker plus the two new ones.
>bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic clicks-replicated

You can run the view topics command again (above).

We can also describe the newly created topic as we specified:
>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic clicks-replicated
>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic clicks

6.) Fault Tolerance

Now, we can send some messages to our replicated topic:
>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic clicks-replicated

Read the message in the replicated topic:
>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic clicks-replicated

Now, shut down the second node by ctrl + c in the command or close it.

Again, we can describe the replicated topic.
>bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic clicks-replicated

We can read the messages again from the beginning (from the original broker and the first new broker; note that the second broker is down).
>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic clicks-replicated
>bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --from-beginning --topic clicks-replicated

*Close all the terminals except ZooKeeper and the original broker on port 9092.

7.) Import / export data from and to a file using a connector

Kafka can also read and write from and to a file. Let's try that by using the default configurations.
  • connect-standalone.properties - configures the standalone Connect worker (which brokers to connect to, converters, etc.)
  • connect-file-source.properties - specifies the source file to read (default: test.txt; note the topic value here)
  • connect-file-sink.properties - specifies where to write (default: test.sink.txt)
Run the connector
>bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
-Create a test.txt file in the directory where you run the connector and add some text to it. Make sure it ends with a newline; otherwise, the last line will not be read.

Notice the log; we should see something like:
[2019-01-13 16:17:09,799] WARN Couldn't find file test.txt for FileStreamSourceTask, sleeping to wait for it to be created (org.apache.kafka.connect.file.FileStreamSourceTask:109)
[2019-01-13 16:17:10,838] INFO Cluster ID: MYm1bMttRdCqG-njYXeO-w (org.apache.kafka.clients.Metadata:285)

There should be a newly created file with the same content named: test.sink.txt.

Note that you can still read the messages using the console consumer. The topic connect-test comes from connect-file-source.properties:
>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning

Modify test.txt, adding "Hello World!", and your consumer should pick up the message.
>{"schema":{"type":"string","optional":false},"payload":"Hello World!"}

*Terminate the consumer but leave the original broker (port 9092) running.

8.) Streaming using WordCount app

Now let's create a new file with the following content:
>echo -e "The quick brown fox jumps over the lazy dog.\nThe quick brown fox jumps over the lazy dog." > file-input.txt

Create a new topic:
>bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-plaintext-input

Send the file data to the topic (in a real setup it could equally come from a live stream):
>bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input < file-input.txt

Consume the input:
>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input --from-beginning

We can use the WordCount app packaged with Kafka to stream and process the data we just sent.
>bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo

Consume the messages using String and Long deserializers:
>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic streams-wordcount-output --from-beginning --formatter kafka.tools.DefaultMessageFormatter --property print.key=true --property print.value=true --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

You should have an output similar to:
the 1
quick 1
brown 1
fox 1
jumps 1
over 1
the 2
lazy 1
dog. 1
the 3
quick 2
brown 2
fox 2
jumps 2
over 2
the 4
lazy 2
dog. 2
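For reference, the packaged WordCountDemo is built with the Kafka Streams DSL. The sketch below approximates that topology against the same topics; it is an illustration under my own assumptions, not the exact demo source:

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // read lines from the input topic, split them into words, and count per word
        KStream<String, String> lines = builder.stream("streams-plaintext-input");
        KTable<String, Long> counts = lines
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\s+")))
                .groupBy((key, word) -> word)
                .count();
        // publish the running counts to the topic read by the console consumer above
        counts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}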

The Magic of Using Lombok in Spring


1. Overview

Lombok is a Java library that removes a lot of boilerplate for developers, such as the automatic creation of getters/setters, constructors, etc. For a more detailed list of features, check the Project Lombok documentation.
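To illustrate, here is a hypothetical entity class; with the annotations below, Lombok generates the getters, setters, equals/hashCode, toString, and constructors at compile time:

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

// Hypothetical class, for illustration only. @Data generates getters, setters,
// equals/hashCode and toString; the constructor annotations generate the constructors.
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Book {
    private Long id;
    private String title;
    private String author;
}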

2. Lombok Installation


After running the Lombok installer against your IDE, make sure to close Spring STS and relaunch it with the -clean parameter to enable the plugin.
>sts.exe -clean

3. Spring Project

To demonstrate the power of the Lombok library, I created a Spring REST API demo project where the entity fields rely on automatic getter/setter generation.

As a bonus, this project is also HATEOAS enabled.

The project is available at https://github.com/czetsuya/Spring-Lombok.


Secure Spring Boot REST Project with Keycloak

1. Overview

In this blog, we will cover the basics of securing a Spring project with Keycloak using keycloak-spring-boot-starter and keycloak-spring-security-adapter.

2. Limitation

Keycloak is already a well-documented topic that needs no further write up. Here's a link to the documentation: https://www.keycloak.org/documentation.html.

3. The Spring Boot Project

I'm using Spring STS so I created my project with it, but you can use the Spring initializer from the Spring website. 

Here's the content of the pom.xml file. Note that keycloak-spring-security-adapter is already pulled in by keycloak-spring-boot-starter.

For more detailed instructions on how to set up the Keycloak Spring Boot starter, you may check: https://www.keycloak.org/docs/latest/securing_apps/index.html#_spring_boot_adapter.

<properties>
    <java.version>11</java.version>
    <keycloak.version>4.8.1.Final</keycloak.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
    <dependency>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-spring-boot-starter</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.keycloak.bom</groupId>
            <artifactId>keycloak-adapter-bom</artifactId>
            <version>${keycloak.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

3.1 Configuration

There are actually 2 ways we can secure a Spring project with Keycloak.

3.1.1 Using Keycloak Spring Boot Starter

This is the standard approach where we move the Keycloak client configuration from keycloak.json into application.yml or the standard Spring configuration file.
keycloak:
  enabled: true
  realm: dev
  auth-server-url: http://localhost:8083/auth
  ssl-required: external
  resource: dev-api
  bearer-only: true
  confidential-port: 0
  use-resource-role-mappings: false
  principal-attribute: preferred_username
  cors: true
  security-constraints:
    - auth-roles:
        - User
      security-collections:
        - name: unsecured
          patterns:
            - /users
    - auth-roles:
        - Admin
      security-collections:
        - name: secured
          patterns:
            - /admin
logging:
  level:
    org.apache.catalina: DEBUG

In this example configuration, we define 2 URL patterns, /users and /admin, each secured by its respective role. Take note that each security constraint is composed of an auth-roles array and a security-collections array.

Enabling the log on org.apache.catalina will let us see the security check on the given URL when we invoke the API.

At the same time, if we set the config resolver to KeycloakSpringBootConfigResolver, then we can also configure the HttpSecurity.

Below is part of the class that extends KeycloakWebSecurityConfigurerAdapter. Keycloak provides this base class for easier configuration as well as the @KeycloakConfiguration annotation.

@Bean
public KeycloakConfigResolver keycloakConfigResolver() {
    return new KeycloakSpringBootConfigResolver();
}

@Override
protected void configure(HttpSecurity http) throws Exception {
    super.configure(http);
    http.cors() //
        .and() //
        .csrf().disable() //
        .anonymous().disable() //
        .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS) //
        .and() //
        .authorizeRequests() //
        .antMatchers("/users*").hasRole("USER") //
        .antMatchers("/admin*").hasRole("ADMIN") //
        .anyRequest().denyAll();
}
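For completeness, here is a minimal sketch of the endpoints these rules protect. This controller is hypothetical and not taken from the original project:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller matching the /users and /admin patterns secured above.
@RestController
public class DemoController {

    @GetMapping("/users")
    public String users() {
        return "visible to the User role";
    }

    @GetMapping("/admin")
    public String admin() {
        return "visible to the Admin role";
    }
}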

3.1.2 Using Keycloak Spring Security Adapter

For Spring developers, I think this is the mode they are more familiar with. Basically, it will use the configuration from keycloak.json (ignoring the settings in application.yml).

For this to work we need to add a dependency to our project:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Delete the Keycloak-related configuration in application.yml, including the security constraints, and remove the keycloakConfigResolver bean, since that bean is what tells Spring to ignore the keycloak.json file. This leaves us with the security rules in configure(HttpSecurity http), which is still good.

By default, the adapter will look for a keycloak.json file inside the WEB-INF folder, but since the project is of jar type, this folder does not exist, so we need to set a system property in Spring STS:

keycloak.configurationFile=classpath:keycloak.json



And make sure that we have the keycloak.json file inside our src/main/resources folder.

The complete source code is available at Github: https://github.com/czetsuya/Spring-Keycloak-with-REST-API

Enable HTTPS / SSL for Wildfly

Here are the steps I run through to enable SSL / HTTPS for Wildfly 14.

Notice that instead of generating a key/certificate pair directly, we use a special type of container for Java called a keystore. A keystore is a single file that contains both the key and the certificate.

Assuming we are trying to secure the website broodcamp.com, here are the steps:
1.) Generate the key. In the first and last name entry, enter your FQDN, which in our case is broodcamp.com.
>keytool -genkey -alias broodcamp.com -keyalg RSA -keystore keycloak.jks

2.) Convert the keystore to pkcs12 format.
>keytool -importkeystore -srckeystore keycloak.jks -destkeystore keycloak.jks -deststoretype pkcs12

3.) Generate a certificate request that we will submit to a certificate broker like namecheap.com. We will be using a Comodo PositiveSSL certificate from Namecheap: https://www.namecheap.com/security/ssl-certificates/comodo/positivessl.aspx.
>keytool -certreq -alias broodcamp.com -keystore keycloak.jks > keycloak.careq
*In order to validate the certificate, I used domain validation and added a CNAME. You can also use email validation, etc.

4.) After validation, you should receive a zipped file from Namecheap that contains 3 files: the certificate, the bundle, and the p7b. We will use the p7b, which already contains the certificate chain, and import it into our keystore.
>keytool -import -alias broodcamp.com -trustcacerts -file broodcamp_com.p7b -keystore keycloak.jks
>keytool -list -v -keystore keycloak.jks

*Now we have a signed certificate, and here's how it looks in Windows.

The next series of steps modifies Wildfly's standalone.xml file to enable SSL.

5.) Modify ApplicationRealm and add the keystore.
After <security-realm name="ApplicationRealm">, add:
<ssl>
    <keystore alias="server" generate-self-signed-certificate-host="localhost" key-password="password" keystore-password="password" path="application.keystore" relative-to="jboss.server.config.dir" />
</ssl>

6.) Search for <server name="default-server"> and add an https-listener entry that references the ApplicationRealm.

7.) If you're using Java and your application has a web.xml, you need to require the SSL transport for the resources you want to secure. For example, to secure a manifest resource:
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Manifest</web-resource-name>
        <url-pattern>/rest/manifest/manifest</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

And before I forget: if you are using Wildfly, you are running on port 8080 by default and 8443 for HTTPS, so make sure to redirect requests to 8443 and not 443.

Microservices in Spring

And so I was trying to learn microservices, and since I'm from a Java EE background, it was a given that I would first check http://wildfly-swarm.io/tutorial. I found the code too complicated; how can I focus on the business problem if it's like that?

And so I shifted my attention to Spring Cloud. Although I prefer understanding how things work under the hood, I admit that having annotations do the work is a welcome change in my code.

And so here's the functional demo code I developed: https://github.com/czetsuya/Spring-Cloud-Eureka-Hystrix-Demo

PS: I'm too lazy to write a more detailed explanation :-) But importing the projects and running them in the right order should do the trick.

Install Java8 in Ubuntu

A set of commands to install Java8 on Ubuntu.


sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

Now set JAVA_HOME in the environment variables.

sudo vi /etc/environment
// add
JAVA_HOME="/usr/lib/jvm/java-8-oracle"
// save

// reload
source /etc/environment

// check
echo $JAVA_HOME

Social Login using REST FB


This is an implementation tutorial on how we can use RestFB to enable Facebook social login in our web application.

Basically, it's a project created from the javaee7-war archetype.

To run this app you need to set up a Facebook application with the callback URL /oauth_callback.

pom.xml - we need to define the RestFB dependency, a Java client library for the Facebook Graph API and login flow.
<dependency>
<groupId>com.restfb</groupId>
<artifactId>restfb</artifactId>
<version>2.9.0</version>
</dependency>

<dependency>
<groupId>org.jboss.spec.javax.servlet</groupId>
<artifactId>jboss-servlet-api_3.0_spec</artifactId>
<version>1.0.2.Final</version>
</dependency>

Callback Servlet -
package com.broodcamp.restfb.servlet;

import java.io.IOException;

import javax.inject.Inject;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.broodcamp.restfb.provider.FacebookProvider;
import com.restfb.DefaultFacebookClient;
import com.restfb.FacebookClient;
import com.restfb.Parameter;
import com.restfb.Version;
import com.restfb.types.User;

@WebServlet("/oath_callback")
public class OauthCallbackServlet extends HttpServlet {

private static final long serialVersionUID = 4400146595698418400L;

private static Logger log = LoggerFactory.getLogger(OauthCallbackServlet.class);

@Inject
private FacebookProvider facebookProvider;

private String code;

@Override
public void service(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
code = req.getParameter("code");
if (code == null || code.equals("")) {
throw new RuntimeException("ERROR: Didn't get code parameter in callback.");
}
String accessToken = facebookProvider.obtainAccessToken(code);
FacebookClient facebookClient = new DefaultFacebookClient(accessToken, Version.LATEST);
User facebookUser = facebookClient.fetchObject("me", User.class, Parameter.with("fields", "email,first_name,last_name,birthday"));
log.debug("FB User firstName={}, lastName={}, email={}, birthday={}", facebookUser.getFirstName(), facebookUser.getLastName(), facebookUser.getEmail(),
facebookUser.getBirthday());

RequestDispatcher dispatcher = req.getRequestDispatcher("account.jsf?accessToken=" + accessToken);
dispatcher.forward(req, res);
}
}

Facebook Provider - Provider class for initializing the facebook api.
package com.broodcamp.restfb.provider;

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import com.restfb.DefaultFacebookClient;
import com.restfb.FacebookClient;
import com.restfb.FacebookClient.AccessToken;
import com.restfb.Version;
import com.restfb.scope.FacebookPermissions;
import com.restfb.scope.ScopeBuilder;

@Singleton
@Startup
public class FacebookProvider {

private String appId = "xxx";
private String appSecret = "yyy";
private String redirectUrl = "http://localhost:8080/restfb-demo/oauth_callback";
private String loginDialogUrlString;

@PostConstruct
private void init() {
ScopeBuilder scopeBuilder = new ScopeBuilder();
scopeBuilder = scopeBuilder.addPermission(FacebookPermissions.EMAIL);
scopeBuilder = scopeBuilder.addPermission(FacebookPermissions.PUBLIC_PROFILE);

FacebookClient client = new DefaultFacebookClient(Version.LATEST);
loginDialogUrlString = client.getLoginDialogUrl(appId, redirectUrl, scopeBuilder);
}

public String getAuthUrl() {
return loginDialogUrlString;
}

public String obtainAccessToken(String verificationCode) {
FacebookClient client = new DefaultFacebookClient(Version.LATEST);
AccessToken accessToken = client.obtainUserAccessToken(appId, appSecret, redirectUrl, verificationCode);

return accessToken.getAccessToken();
}
}
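To tie the flow together, a login entry point can simply redirect the browser to the dialog URL built by FacebookProvider. The servlet below is a hypothetical sketch, not part of the original demo:

import java.io.IOException;

import javax.inject.Inject;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical entry point: sends the browser to the Facebook login dialog,
// which in turn redirects back to /oauth_callback with a code parameter.
@WebServlet("/fb_login")
public class FacebookLoginServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Inject
    private FacebookProvider facebookProvider;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        res.sendRedirect(facebookProvider.getAuthUrl());
    }
}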

The repository is available at: https://github.com/czetsuya/RESTFB-Demo

Hadoop MapReduce Demo

Versions:
  • Hadoop 3.1.1 
  • Java10
Set the following environment variables:
  • JAVA_HOME 
  • HADOOP_HOME

For Windows

Download the Hadoop 3.1.1 binaries for Windows at https://github.com/s911415/apache-hadoop-3.1.0-winutils. Extract them into HADOOP_HOME\bin and make sure to overwrite the existing files.

For Ubuntu

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys

The following instructions will install Hadoop in pseudo-distributed operation mode.

1.) Create the following folders:
HADOOP_HOME/tmp
HADOOP_HOME/tmp/dfs/data
HADOOP_HOME/tmp/dfs/name

2.) Set the following properties in core-site.xml and hdfs-site.xml.

core-site.xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9001</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>HADOOP_HOME/tmp</value>
</property>

hdfs-site.xml
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///HADOOP_HOME/tmp/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///HADOOP_HOME/tmp/dfs/data</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
3.) Run hadoop namenode -format. Don't forget the file:/// prefix in hdfs-site.xml on Windows; otherwise, the format will fail.

4.) Run HADOOP_HOME/sbin/start-dfs.sh (start-dfs.cmd on Windows).

5.) If all goes well, you can check the log for the web port in the console. In my case it's http://localhost:9870.


6.) You can now upload any file through the URL from step 5.



Now let's try to create a project that will test our Hadoop setup, or download an existing one, for example this project: https://www.guru99.com/create-your-first-Hadoop-program.html. It comes with a nice explanation, so let's try it. I've repackaged it into a pom project and uploaded it to GitHub at https://github.com/czetsuya/Hadoop-MapReduce.
  1. Clone the repository. 
  2. Open the HDFS URL from step 5 above, and create an input and an output folder.
  3. In the input folder, upload the SalesJan2009 file from the project's root folder.
  4. Run hadoop jar hadoop-mapreduce-0.0.1-SNAPSHOT.jar /input /output.
  5. Check the output from the URL and download the resulting file.

To run Hadoop in standalone mode, download and unpack it as is. Go to our project's folder, build it using Maven, then run the Hadoop command below:
>$HADOOP_HOME/bin/hadoop jar target/hadoop-mapreduce-0.0.1-SNAPSHOT.jar input output

input - a directory that should contain the CSV file
output - a directory that will be created after the run. The output file will be saved here.
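For orientation, here is a rough Java sketch of the kind of mapper and reducer this job runs: counting transactions per country from the CSV. The class names are illustrative assumptions, not the actual guru99 sources, and it assumes the country sits in the 8th column (index 7) of SalesJan2009.csv.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical names; the referenced project ships its own mapper/reducer classes.
public class SalesByCountry {

    public static class SalesMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // assumption: column index 7 of the CSV holds the country
            String[] columns = value.toString().split(",");
            if (columns.length > 7) {
                context.write(new Text(columns[7]), ONE);
            }
        }
    }

    public static class SalesReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text country, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            // sum the occurrences emitted by the mapper for each country
            int total = 0;
            for (IntWritable c : counts) {
                total += c.get();
            }
            context.write(country, new IntWritable(total));
        }
    }
}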

The common causes of problems:

  • Improperly configured core-site.xml or hdfs-site.xml (data node and name node directories)
  • File / folder permissions

References

  • https://www.guru99.com/create-your-first-hadoop-program.html
  • https://github.com/czetsuya/Hadoop-MapReduce
  • https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Standalone_Operation

Hibernate OGM for MongoDB

So lately I've been playing with the latest Hibernate OGM for MongoDB, version 5.4.0.Beta2, but I was not able to run a demo project created from the wildfly-javaee7-war archetype by following the documentation.

Here are the changes I've made to make the Arquillian test run:


public static Archive<?> createTestArchive() {
    String manifest = Descriptors.create(ManifestDescriptor.class)
            .attribute("Dependencies", "org.hibernate.ogm:5.4 services, org.hibernate.ogm.mongodb:5.4 services")
            .exportAsString();

    return ShrinkWrap.create(WebArchive.class, "test.war") //
            .addClasses(Member.class, MemberRegistration.class, Resources.class) //
            .addAsResource(new StringAsset(manifest), "META-INF/MANIFEST.MF") //
            .addAsResource("META-INF/test-persistence.xml", "META-INF/persistence.xml") //
            // doesn't work on this version
            // .addAsResource("jboss-deployment-structure.xml", "WEB-INF/jboss-deployment-structure.xml") //
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml") //
            // Deploy our test datasource
            .addAsWebInfResource("test-ds.xml");
}

Notice that instead of using a jboss-deployment-structure file, we use a manifest entry. Maybe it's a bug in this release.

You can download the complete source code from:

  • https://github.com/czetsuya/Hibernate-OGM-MongoDB-Demo

Apache Cassandra Clustering

This tutorial will help us configure an Apache Cassandra ring with 2 nodes. It will not explain what Cassandra is; use Google for that.

There are actually not many properties to update in order to set up the cluster. Note that in this particular example, we will configure 2 nodes: 1 seed and 1 client.

Configuration

The Seed

"The seed node designation has no purpose other than bootstrapping the gossip process for new nodes joining the cluster. Seed nodes are not a single point of failure, nor do they have any other special purpose in cluster operations beyond the bootstrapping of nodes."

Open and edit CASSANDRA_HOME/conf/cassandra.yaml
  • rpc_address - set to the IP address of the node
  • seed_provider / parameters / seeds - set to the IP address of the node
  • listen_address - set to the IP address of the node

Client Node

"All nodes in Cassandra are peers. A client read or write request can go to any node in the cluster. When a client connects to a node and issues a read or write request, that node serves as the coordinator for that particular client operation.
The job of the coordinator is to act as a proxy between the client application and the nodes (or replicas) that own the data being requested. The coordinator determines which nodes in the ring should get the request based on the cluster configured partitioner and replica placement strategy."

Open and edit CASSANDRA_HOME/conf/cassandra.yaml
  • rpc_address - set to the IP address of the node
  • seed_provider / parameters/seeds - set to the IP address of the seed node
  • listen_address - set to the IP address of the node
As you can see the only difference is the value of the seeds.

Now start the Cassandra instance on the seed node, followed by the client node. You should get the following log in the seed machine:
INFO  [HANDSHAKE-/192.168.0.44] 2018-08-02 10:53:24,412 OutboundTcpConnection.java:560 - Handshaking version with /192.168.0.44
INFO [GossipStage:1] 2018-08-02 10:53:25,421 Gossiper.java:1053 - Node /192.168.0.44 has restarted, now UP
INFO [GossipStage:1] 2018-08-02 10:53:25,431 StorageService.java:2292 - Node /192.168.0.44 state jump to NORMAL
INFO [GossipStage:1] 2018-08-02 10:53:25,441 TokenMetadata.java:479 - Updating topology for /192.168.0.44
INFO [GossipStage:1] 2018-08-02 10:53:25,442 TokenMetadata.java:479 - Updating topology for /192.168.0.44
INFO [HANDSHAKE-/192.168.0.44] 2018-08-02 10:53:25,472 OutboundTcpConnection.java:560 - Handshaking version with /192.168.0.44
INFO [RequestResponseStage-1] 2018-08-02 10:53:26,216 Gossiper.java:1019 - InetAddress /192.168.0.44 is now UP
WARN [GossipTasks:1] 2018-08-02 10:53:26,414 FailureDetector.java:288 - Not marking nodes down due to local pause of 79566127100 > 5000000000

Can you guess which IP is the seed?


Wildfly server provisioning elastic search integration

I'm interested in evaluating the integration of Elasticsearch with hibernate-search. I'm using the Wildfly container; however, Wildfly's bundled hibernate-search library is a bit outdated (5.5.8), so I needed a way to update the jars. That's what led me to Wildfly's server provisioning using feature packs, which is well explained here: https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#updating-wildfly-hibernatesearch-versions

As stated there, you need to add the lines below to your persistence.xml:
<property name="jboss.as.jpa.providerModule" value="org.hibernate:5.3" />
<property name="wildfly.jpa.hibernate.search.module" value="org.hibernate.search.orm:5.10.3.Final" />

Then you need to create a file named server-provisioning.xml in your project's root folder:

<server-provisioning xmlns="urn:wildfly:server-provisioning:1.1" copy-module-artifacts="true">
    <feature-packs>

        <feature-pack
            groupId="org.hibernate"
            artifactId="hibernate-search-jbossmodules-orm"
            version="5.10.3.Final" />

        <feature-pack
            groupId="org.hibernate"
            artifactId="hibernate-search-jbossmodules-elasticsearch"
            version="5.10.3.Final" />

        <feature-pack
            groupId="org.wildfly"
            artifactId="wildfly-feature-pack"
            version="13.0.0.Final" />

    </feature-packs>
</server-provisioning>
And finally, in your pom.xml file, add the plugin below:
<plugin>
    <groupId>org.wildfly.build</groupId>
    <artifactId>wildfly-server-provisioning-maven-plugin</artifactId>
    <version>1.2.6.Final</version>
    <executions>
        <execution>
            <id>server-provisioning</id>
            <goals>
                <goal>build</goal>
            </goals>
            <phase>compile</phase>
            <configuration>
                <config-file>server-provisioning.xml</config-file>
                <server-name>wildfly-with-updated-hibernate-search</server-name>
            </configuration>
        </execution>
    </executions>
</plugin>
It should create a new folder in your target directory named wildfly-with-updated-hibernate-search, and you should re-configure this server for your needs: datasource, mail, cache, etc. Make sure that it contains the jar files inside the modules folder; the copy-module-artifacts="true" setting above should take care of that. Notice that in the hibernate-search documentation this attribute is not set, which is why I spent some hours figuring out how to obtain the jars (I even downloaded some :-)).
It works for a basic requirement, but I still ran into some errors, like:
Caused by: java.lang.NoClassDefFoundError: javax/persistence/TableGenerators
Which should be solved by adding:
<dependency>
<groupId>javax.persistence</groupId>
<artifactId>javax.persistence-api</artifactId>
<version>2.2</version>
</dependency>

But that does not solve the issue so I added the -Dee8.preview.mode=true parameter and that did the trick.

Well, you may just want to wait for the release of Wildfly14.

Changes for Elasticsearch

In your project dependency add:
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-search-elasticsearch</artifactId>
    <version>5.10.3.Final</version>
</dependency>

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.2.3</version>
</dependency>

Make some minor tweaks to persistence.xml
<property name="hibernate.search.default.indexmanager" value="elasticsearch" />
<property name="hibernate.search.default.elasticsearch.host" value="http://127.0.0.1:9200" />
<property name="hibernate.search.default.elasticsearch.index_schema_management_strategy" value="CREATE" />

Run elasticsearch in docker https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html. Make sure that the status of your elasticsearch server is green. See docker-compose.yml in the project mentioned below.

Run your application. You should see logs in Elasticsearch confirming the data that was posted.

You may want to check the complete code accessible at https://github.com/czetsuya/hibernate-search-demo. Switch to latest-hibernate-search branch.
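As a quick illustration, once the index manager points at Elasticsearch, a regular Hibernate Search query is executed against it transparently. Below is a minimal sketch using the Hibernate Search 5.x query DSL; the Book entity (annotated with @Indexed and a @Field title) is a hypothetical example, not taken from the demo project.

import java.util.List;

import javax.persistence.EntityManager;

import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;
import org.hibernate.search.query.dsl.QueryBuilder;

public class ElasticsearchSearchSketch {

    // Book is a hypothetical @Indexed entity with a @Field-annotated title property.
    @SuppressWarnings("unchecked")
    public List<Book> searchByTitle(EntityManager em, String keyword) {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
        QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Book.class).get();
        org.apache.lucene.search.Query luceneQuery = qb.keyword().onField("title").matching(keyword).createQuery();
        // because hibernate.search.default.indexmanager=elasticsearch, this query is
        // translated and executed against the Elasticsearch cluster
        return ftem.createFullTextQuery(luceneQuery, Book.class).getResultList();
    }
}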

Note:

  • There are 3 errors with the elasticsearch integration related to JSON.
  • If you want to try lucene, just modify the configuration in test-persistence.xml.



Run Wildfly and Postgresql in Docker

Docker is a great tool to simulate a development environment. Not only that but it also makes that environment portable by having a docker/docker-compose configuration file. And that is what this blog is all about.

We will launch a pre-configured docker environment. This environment is your typical web-app with database access. In addition to that, we will also include an image for logging, database, and keycloak in case we need an authentication server.

Let's go through the configuration files and start from the main docker compose config. This configuration is responsible for building and starting all our images in the correct order.

version: '3'
services:
  postgres:
    image: postgres:10
    container_name: postgres
    ports:
      - "5432:5432"
    environment:
      - LC_ALL=C.UTF-8
      - POSTGRES_DB=terawhars
      - POSTGRES_USER=terawhars
      - POSTGRES_PASSWORD=terawhars
      - POSTGRES_PORT=5432
    volumes:
      - $PWD/input_files/import-postgres.sql:/docker-entrypoint-initdb.d/import-postgres.sql
      - $PWD/output_files/postgres_data:/var/lib/postgresql/data
  adminer:
    image: adminer
    container_name: adminer
    depends_on:
      - postgres
    ports:
      - 8081:8080
  wildfly:
    image: terawhars
    container_name: wildfly
    build: .
    ports:
      - "8080:8080"
      - "9990:9990"
    environment:
      - DB_HOST=postgres
      - DB_PORT=5432
      - DB_NAME=terawhars
      - DB_USER=terawhars
      - DB_PASS=terawhars
      - DS_NAME=TeraWHARSDS
      - JNDI_NAME=java:jboss/datasources/TeraWHARSDS
    depends_on:
      - postgres
    volumes:
      - $PWD/output_files/logs:/opt/jboss/wildfly/standalone/log
      - $PWD/output_files/terawharsdata:/opt/jboss/wildfly/terawharsdata
      - jboss-conf:/opt/jboss/wildfly/standalone/configuration
  keycloak:
    image: jboss/keycloak:4.0.0.Final
    container_name: keycloak
    ports:
      - "8083:8080"
    environment:
      - DB_VENDOR=POSTGRES
      - DB_ADDR=postgres
      - DB_DATABASE=terawhars
      - DB_USER=terawhars
      - DB_PASSWORD=terawhars
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
    depends_on:
      - postgres
  weblogs:
    image: opencell/alpine-tailon
    container_name: tailon
    depends_on:
      - wildfly
    ports:
      - 8082:8080
    volumes:
      - $PWD/output_files/logs:/logs/
volumes:
  jboss-conf: {}
This configuration installs PostgreSQL, Adminer, Tailon, Wildfly, and Keycloak. For more detailed instructions on what each field means, please consult the Docker documentation; we're not here to teach that.

Things to take note of:

  • You need to create the folders specified in the volumes. Docker will copy the files from the container to your local machine; for Docker to do that, you need to share your drive (on Windows).
  • Don't use the same local port for different images.
  • Notice that we use a single database for Keycloak and the Web app/Wildfly, while we can use another image that allows multiple databases, it's easier this way. The downside is all the tables will be created on a single database for both Keycloak and our Web app.
Most of the images we use require only minimal configuration; for Postgres, we only need to define the database settings. It could become more complicated if we needed replication, etc., but I think our example is enough for this tutorial. Now, the image that requires more fiddling is our web app, since we need to configure the data source and download the war file.
FROM jboss/wildfly:13.0.0.Final

LABEL com.terawhars.version="0.0.1-snapshot"
LABEL author="Edward P. Legaspi"
LABEL email="czetsuya@gmail.com"
LABEL vendor1="TeraWHARS"
LABEL com.terawhars.release-date="2018-07-24"

# Set Postgresql env variables
ENV DB_HOST postgres
ENV DB_PORT 5432
ENV DB_NAME terawhars
ENV DB_USER terawhars
ENV DB_PASS terawhars

ENV DS_NAME TeraWHARSDS
ENV JNDI_NAME java:jboss/datasources/TeraWHARSDS

USER root

ADD https://jdbc.postgresql.org/download/postgresql-42.2.4.jar /tmp/postgresql-42.2.4.jar

WORKDIR /tmp
COPY input_files/wildfly-command.sh ./
COPY input_files/module-install.cli ./
RUN sed -i -e 's/\r$//' ./wildfly-command.sh
RUN chmod +x ./wildfly-command.sh
RUN ./wildfly-command.sh \
&& rm -rf $JBOSS_HOME/standalone/configuration/standalone_xml_history/

# Download and deploy the war file
ADD https://github.com/czetsuya/javaee6-docker-web/releases/download/1.0.0/javaee6-webapp.war $JBOSS_HOME/standalone/deployments

# Create Wildfly admin user
RUN $JBOSS_HOME/bin/add-user.sh admin admin --silent

# Set the default command to run on boot
# This will boot WildFly in the standalone mode and bind to all interface
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]

What it does:

  • Download PostgreSQL driver
  • Setup data source
  • Download the war file in Wildfly's deployment directory
  • Create a default admin user
  • Note that for this particular example, we are using the standalone.xml WF configuration
Some other things to take note:
  • For PostgreSQL, we pass the image an SQL file that will be imported on startup. This initializes our database.
  • We use a .env file to define the PWD variable that is not present in Windows.
To see how it works check out the complete code from https://github.com/czetsuya/Docker-Demo and then run:
>docker-compose up --build

It will take some time during the first run as it will download all the images locally first.

I accept customization jobs for a minimum fee of $50, so don't hesitate to contact me when you're too lazy to try something :-)

How to download a file in Angular2

There are 2 ways I use to download a file with Angular2 or greater. For this example, we will use a Java REST service.

The first approach would be taking advantage of the HTTP download, where the Angular side will just call a URL that will initiate the download.

Here's the web service's code

// The download resource
@GET
@Path("/exportCSV")
Response exportCSV();

@Override
public Response exportCSV() throws IOException {
ResponseBuilder builder = Response.ok();
builder.entity(tallySheetApi.exportCSV(httpServletResponse));

return builder.build();
}

public void exportCSV(HttpServletResponse response) throws IOException {
// write result to csv file
response.addHeader("Access-Control-Allow-Origin", "*");
response.addHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, PUT, DELETE, HEAD");
response.setContentType("text/csv");
response.addHeader("Content-disposition", "attachment;filename=\"animes.csv\"");

writeToCSV(response.getOutputStream(), listValuesHere);

response.flushBuffer();
}

private void writeToCSV(ServletOutputStream servletOutputStream, List<Anime> animes) throws IOException {
Writer writer = new BufferedWriter(new OutputStreamWriter(servletOutputStream));

for (Anime anime : animes) {
writer.write(anime.getTitle());
writer.write(CSV_DELIMITER);
writer.write(anime.getReleaseDate());
writer.write(CSV_DELIMITER);
writer.write(anime.getRating());

writer.write(System.getProperty("line.separator"));
}

writer.close();
}

In the Angular part, we need to call the method below that will redirect to the url we defined above.

exportCSV() {
window.location.href = this.apiUrl + this.RESOURCE_URL + '/exportCSV';
}

The problem with this approach is that we cannot send security headers with the request. To solve that, we need to update both our API and the way we handle the response in Angular.

public class ByteDto {

private String fileContent;

public String getFileContent() {
return fileContent;
}

public void setFileContent(String fileContent) {
this.fileContent = fileContent;
}

}

// The download resource
@GET
@Path("/exportCSV")
Response exportCSV();

@Override
public Response exportCSV() throws IOException {
ResponseBuilder builder = Response.ok();
builder.entity(tallySheetApi.exportCSV());

return builder.build();
}

public ByteDto exportCSV() throws IOException {
ByteArrayOutputStream baos = writeToCSV(listValuesHere);

ByteDto byteDto = new ByteDto();
byteDto.setFileContent(Base64.getEncoder().withoutPadding().encodeToString(baos.toByteArray()));

return byteDto;
}

private ByteArrayOutputStream writeToCSV(List<Anime> animes) throws IOException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Writer writer = new BufferedWriter(new OutputStreamWriter(baos));

for (Anime anime : animes) {
writer.write(anime.getTitle());
writer.write(CSV_DELIMITER);
writer.write(anime.getReleaseDate());
writer.write(CSV_DELIMITER);
writer.write(anime.getRating());

writer.write(System.getProperty("line.separator"));
}

writer.close();

return baos;
}
In this approach, we store the byte array in a DTO instead of writing it to the output stream. Note that we needed to encode the byte array as Base64; otherwise, we would have a problem during deserialization. It also requires some additional work on the Angular side.

exportCSV() {
this.http.get<any>( this.apiUrl + this.RESOURCE_URL + '/exportCSV', { params: this.params } ).subscribe( data => {
console.log( 'downloaded data=' + data.fileContent )
var blob = new Blob( [atob( data.fileContent )], { type: 'text/csv' } )
let filename = 'animes.csv'

if ( window.navigator && window.navigator.msSaveOrOpenBlob ) {
window.navigator.msSaveOrOpenBlob( blob, filename )
} else {
var a = document.createElement( 'a' )
a.href = URL.createObjectURL( blob )
a.download = filename
document.body.appendChild( a )
a.click()
document.body.removeChild( a )
}
} )
}
In this version, we need to decode the byte array from the response and, on IE/Edge, use the msSaveOrOpenBlob method of the window.navigator object. For other browsers, we create an anchor (<a>) element on the fly to invoke the download.

Setting Up Elk and Pushing Relational Data Using Logstash JDBC Input Plugin and Secrets Keystore

Introduction

In this tutorial, we will go over the installation of Elasticsearch, Logstash, and Kibana. We will also show you how to configure the stack to gather and visualize data from a database.

Logstash is an open source tool for collecting, parsing and storing logs/data for future use.

Kibana is a web interface that can be used to search and view the logs/data that Logstash has indexed. Both of these tools are based on Elasticsearch, which is used for storing logs.

Our Goal

The goal of this tutorial is to set up Logstash to gather records from a database and set up Kibana to create a visualization.

Our ELK stack setup has three main components:
  • Logstash: The server component of Logstash that processes database records
  • Elasticsearch: Stores all of the records
  • Kibana: Web interface for searching and visualizing logs.

Install Elasticsearch

Use the MSI installer package. The package contains a graphical user interface (GUI) that guides you through the installation process.

Then double-click the downloaded file to launch the GUI. On the first screen, select the deployment directories.

Then select whether to install Elasticsearch as a service or start it manually as needed; choose "install as a service". For configuration, simply leave the default values. Uncheck all plugins so that none are installed. After clicking the install button, Elasticsearch will be installed.

To check whether Elasticsearch is running, open a command prompt, run "services.msc", and look for Elasticsearch. Its status should be 'Running'.

Or simply download the zipped file from https://www.elastic.co/downloads/elasticsearch.

Install Kibana

You can download Kibana from https://www.elastic.co/downloads/kibana.

After downloading Kibana and unzipping the file, you will get the usual folder structure; the runnable file is located at bin\kibana.bat.

To test, start kibana.bat and point your browser at http://localhost:5601; you should see the Kibana welcome page.


Install Logstash

You can download Logstash from https://www.elastic.co/products/logstash.

After downloading Logstash and unzipping the file, you will get the usual folder structure; the runnable file is located at bin\logstash.bat.

Inserting Data into Logstash Using a Select Query Against the Database

Elastic.co has a good blog post on this topic that you can visit: https://www.elastic.co/blog/logstash-jdbc-input-plugin.

Secrets keystore

When you configure Logstash, you might need to specify sensitive settings or configuration, such as passwords. Rather than relying on file system permissions to protect these values, you can use the Logstash keystore to securely store secret values for use in configuration settings.

Create a keystore

To create a secrets keystore, use the create command:

bin/logstash-keystore create

Add keys

To store sensitive values, such as authentication credentials, use the add command:

bin/logstash-keystore add PG_PWD

When prompted, enter a value for the key.

We could use the keystore to store values such as jdbc_connection_string, jdbc_user, jdbc_password, etc.

For simplicity, an underscore was added when referencing the keys. See below for a sample config file.


Let's say that a PostgreSQL table was changed after pushing the data to Elasticsearch. Those changes will not be present in Elasticsearch. To keep Elasticsearch up to date, we need to run Logstash again with the configuration below.



In this configuration, we are running Logstash every second; of course, you wouldn't do that :-) Normally we run it per day, week, month, etc. It can be configured depending on your needs.


Setup MySQL Database for Remote Access

Here are some useful guidelines for setting up a MySQL server for remote access on Ubuntu.


  1. Install and configure mysql server.
    sudo apt-get update
    sudo apt-get install mysql-server
    mysql_secure_installation
    *Note: MySQL will ask you to set the root password, but MariaDB will not
  2. Bind MySQL to the public IP where it is hosted by editing the configuration file (MySQL: /etc/mysql/my.cnf, MariaDB: /etc/mysql/mariadb.conf.d/50-server.cnf); the cnf file sometimes points to another file, so make sure to check that. Search for the line with the "bind-address" string and set the value to your IP address, or comment the bind-address line out.
  3. Make sure that your user has enough privileges to access the database remotely:
    create user 'lacus'@'localhost' identified by 'lacus';
    grant all privileges on *.* to 'lacus'@'localhost' <with grant option>;
    create user 'lacus'@'%' identified by 'lacus';
    grant all privileges on *.* to 'lacus'@'%' <with grant option>;
  4. Open port: 3306 in the firewall:
    sudo ufw allow 3306/tcp
    sudo service ufw restart
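To verify remote access, you can run a quick JDBC check from another machine. This is a hypothetical sketch: the host IP is a placeholder, the credentials follow the grants above, and it assumes mysql-connector-java is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class RemoteMySqlCheck {

    public static void main(String[] args) throws Exception {
        // 192.168.0.10 is a placeholder for the server's IP address;
        // user and password match the 'lacus'@'%' account created above
        String url = "jdbc:mysql://192.168.0.10:3306/mysql?useSSL=false";
        try (Connection con = DriverManager.getConnection(url, "lacus", "lacus")) {
            System.out.println("Connected: " + con.getMetaData().getDatabaseProductVersion());
        }
    }
}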

Hibernate - Get classes that referenced a given entity

There are times when we want to list the entities that reference (via foreign keys) a given entity x. For example, when deleting an entity x that is being referenced, Hibernate throws a generic ConstraintViolationException, and oftentimes we need to display these referencing classes. This is how we do it:

/**
 * Map of referenced classes to the classes and fields that reference them.
 */
private static Map<Class, Map<Class, List<Field>>> classReferences = new HashMap<>();

/**
* Determines the generic type of a field. For example, a List<String> field returns String.
*/
public static Class getFieldGenericsType(Field field) {
if (field.getGenericType() instanceof ParameterizedType) {
ParameterizedType aType = (ParameterizedType) field.getGenericType();
Type[] fieldArgTypes = aType.getActualTypeArguments();

for (Type fieldArgType : fieldArgTypes) {
Class fieldArgClass = (Class) fieldArgType;
return fieldArgClass;
}
}

return null;
}

/**
* Gets all the declared fields of a given class.
**/
public static List<Field> getAllFields(List<Field> fields, Class<?> type) {
fields.addAll(Arrays.asList(type.getDeclaredFields()));

if (type.getSuperclass() != null) {
fields = getAllFields(fields, type.getSuperclass());
}

return fields;
}

/**
* Gets all the classes that referenced a given class.
*/
public static Map<Class, List<Field>> getReferencedClassesAndFieldsOfType(Class fieldClass) {

if (classReferences.containsKey(fieldClass)) {
return classReferences.get(fieldClass);
}

Class superClass = fieldClass.getSuperclass();

Map<Class, List<Field>> matchedFields = new HashMap<>();

Reflections reflections = new Reflections("com.broodcamp.model");
// gets all our entity classes in our project
Set<Class<? extends BaseEntity>> classes = reflections.getSubTypesOf(BaseEntity.class);

// loop thru
for (Class<? extends BaseEntity> clazz : classes) {
// we are not interested with either interface or abstract
if (clazz.isInterface() || Modifier.isAbstract(clazz.getModifiers())) {
continue;
}

// gets all the fields of a given class
List<Field> fields = getAllFields(new ArrayList<Field>(), clazz);

// loops thru the fields
for (Field field : fields) {

// we are not interested with transient field
if (field.isAnnotationPresent(Transient.class)) {
continue;
}

// filter the field or parametized field of type fieldClass
// this means it refer to our referenced class
if (field.getType() == fieldClass || (Collection.class.isAssignableFrom(field.getType()) && getFieldGenericsType(field) == fieldClass) || (superClass != null
&& (field.getType() == superClass || (Collection.class.isAssignableFrom(field.getType()) && getFieldGenericsType(field) == superClass)))) {

// add to map
if (!matchedFields.containsKey(clazz)) {
matchedFields.put(clazz, new ArrayList<>());
}
matchedFields.get(clazz).add(field);
}
}
}
classReferences.put(fieldClass, matchedFields);

return matchedFields;
}
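A short usage sketch of the method above: ReflectionUtils is assumed to be the class holding these static helpers, and Author is a hypothetical entity extending BaseEntity.

import java.lang.reflect.Field;
import java.util.List;
import java.util.Map;

public class ReferenceLookupExample {

    public static void main(String[] args) {
        // list every entity class (and its fields) that holds a reference to Author,
        // e.g. to build a meaningful error message before attempting a delete
        Map<Class, List<Field>> refs = ReflectionUtils.getReferencedClassesAndFieldsOfType(Author.class);
        refs.forEach((clazz, fields) ->
                fields.forEach(f -> System.out.println(clazz.getSimpleName() + "." + f.getName())));
    }
}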

How to install Keycloak adapter using full profile

Normally, we need the full profile when we are dealing with a messaging requirement; in our case, we use ActiveMQ.

Wildfly's profile differences are well explained here: https://stackoverflow.com/questions/26342201/what-is-the-difference-between-standalone-full-and-standalonefull-ha

Now, what if we want to install the Keycloak adapter using such a profile? Here are the steps:


  1. Open adapter-install-offline.cli in WILDFLY_HOME/bin; we assume that you have already extracted the Wildfly adapter zip accordingly.
  2. Replace embed-server with the profile you need (I need full):
    >embed-server --server-config=standalone-full.xml
  3. Run : adapter-install-offline.cli
    >jboss-cli.bat --file=adapter-install-offline.cli
We should see a successful result.
As a bonus, we can also configure Eclipse to run using our desired profile.

How to encrypt and decrypt an object in and from a file in Java

There are times when we need to write something to a file but don't want it to be readable as plain text. In this case, we can use any type of encryption mechanism; but what if we also want to decrypt the encrypted file and read its contents back, for example a configuration file?

Let's show some code as usual:

public class CipherUtils {

public static final String CIPHER_MODE = "AES/CBC/PKCS5Padding";

private CipherUtils() {

}

public static void encode(Serializable object, String password, String path)
throws IOException, InvalidKeyException, NoSuchAlgorithmException, NoSuchPaddingException,
IllegalBlockSizeException, InvalidAlgorithmParameterException {
Cipher cipher = Cipher.getInstance(CIPHER_MODE);
cipher.init(Cipher.ENCRYPT_MODE, fromStringToAESkey(password), new IvParameterSpec(new byte[16]));

// seal the object and write it to the file
SealedObject sealedObject = new SealedObject(object, cipher);
FileOutputStream fos = new FileOutputStream(path);
CipherOutputStream cipherOutputStream = new CipherOutputStream(new BufferedOutputStream(fos), cipher);

ObjectOutputStream outputStream = new ObjectOutputStream(cipherOutputStream);
outputStream.writeObject(sealedObject);
outputStream.close();
fos.close();
}

public static Serializable decode(String password, String path)
throws NoSuchAlgorithmException, NoSuchPaddingException, InvalidKeyException, IOException,
ClassNotFoundException, IllegalBlockSizeException, BadPaddingException, InvalidAlgorithmParameterException {
Cipher cipher = Cipher.getInstance(CIPHER_MODE);

// read the encrypted object back from the file
cipher.init(Cipher.DECRYPT_MODE, fromStringToAESkey(password), new IvParameterSpec(new byte[16]));
CipherInputStream cipherInputStream = new CipherInputStream(new BufferedInputStream(new FileInputStream(path)),
cipher);

ObjectInputStream inputStream = new ObjectInputStream(cipherInputStream);
SealedObject sealedObject = (SealedObject) inputStream.readObject();
Serializable serializeableObject = (Serializable) sealedObject.getObject(cipher);
inputStream.close();

return serializeableObject;
}

public static SecretKey fromStringToAESkey(String s) throws UnsupportedEncodingException {
// 128bit key need 16 byte
byte[] rawKey = new byte[16];
// if you don't specify the encoding you might get weird results
byte[] keyBytes = s.getBytes("UTF-8");
System.arraycopy(keyBytes, 0, rawKey, 0, keyBytes.length);

return new SecretKeySpec(rawKey, "AES");
}
}

This utility class can encrypt and decrypt any serializable object we have.

And here's how we use it:


public class CipherUtilsTest {

private Person person;

@Before
public void init() {
person = new Person();
person.setFirstname("Shirayuki");
person.setLastname("Hime");
person.setAge(18);
}

@Test
public void testEncodeDecode() {
try {
CipherUtils.encode(person, "shirayuki", "c://temp//cipher");
Person decodedPerson = (Person) CipherUtils.decode("shirayuki", "c://temp//cipher");
assertEquals(person.getAge(), decodedPerson.getAge());
assertEquals(person.getFirstname(), decodedPerson.getFirstname());
assertEquals(person.getLastname(), decodedPerson.getLastname());
} catch (InvalidKeyException | NoSuchAlgorithmException | NoSuchPaddingException | IllegalBlockSizeException
| IOException | ClassNotFoundException | BadPaddingException | InvalidAlgorithmParameterException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}

}

Hibernate Search Faceting

With the documentation available for Hibernate Search (https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#query-faceting), we should be able to implement a faceting example that returns the faceted field and its count. In the Hibernate Search example, the entity is CD and the facet is created on the label field, so it groups the CDs by label and counts each occurrence. But what if we want more info, like the artist, sales, etc.?

Going back to the previous Hibernate example, where we have the entities Book, Author, and Review: let's say we want to group the books by author. How do we achieve that?


  1. Annotate Author.id with @Facet.
    @Facets({ @Facet, @Facet(name = "id_facet", forField = "id_for_facet", encoding = FacetEncodingType.STRING) })
    @Fields({
    @Field(name = "id_for_facet", analyze = Analyze.NO, bridge = @FieldBridge(impl = org.hibernate.search.bridge.builtin.IntegerBridge.class)) })
    @Id
    @Column(name = "id")
    @GeneratedValue
    private Integer id;
  2. Create a class that will hold the desired entity and the facet result.
    public class EntityFacet<T> implements Facet {
    private final Facet delegate;
    private final T entity;

    public EntityFacet(Facet delegate, T entity) {
    this.delegate = delegate;
    this.entity = entity;
    }

    @Override
    public String getFacetingName() {
    return delegate.getFacetingName();
    }

    @Override
    public String getFieldName() {
    return delegate.getFieldName();
    }

    @Override
    public String getValue() {
    return delegate.getValue();
    }

    @Override
    public int getCount() {
    return delegate.getCount();
    }

    @Override
    public Query getFacetQuery() {
    return delegate.getFacetQuery();
    }

    public T getEntity() {
    return entity;
    }

    @Override
    public String toString() {
    return "EntityFacet [delegate=" + delegate + ", entity=" + entity + "]";
    }
    }
  3. Let's query the facet and the entity that contains more of the information we want, in this case the author. Note that we need to add the faceted field to the includePaths property of the @IndexedEmbedded annotation on the main entity (Book), or leave includePaths unspecified so that all annotated fields are included (see the mapping sketch after this list).
    FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(em);
    QueryBuilder qb = fullTextEntityManager.getSearchFactory().buildQueryBuilder().forEntity(Book.class).get();

    org.apache.lucene.search.Query luceneQuery = qb.all().createQuery();
    FullTextQuery fullTextQuery = fullTextEntityManager.createFullTextQuery(luceneQuery, Book.class);

    // define the facet
    FacetingRequest authorFacet = qb.facet().name("authorIdFacet").onField("authors.id_facet").discrete()
    .orderedBy(FacetSortOrder.COUNT_DESC).includeZeroCounts(false).maxFacetCount(5).createFacetingRequest();

    // retrieve facet manager and apply faceting request
    FacetManager facetManager = fullTextQuery.getFacetManager();
    facetManager.enableFaceting(authorFacet);

    // retrieve the faceting results
    List<Facet> facets = facetManager.getFacets("authorIdFacet");

    // collect all the ids
    List<Integer> vcIds = facets.stream().map(p -> Integer.parseInt(p.getValue())).collect(Collectors.toList());
    // query all the Authors given the id we faceted above, I think multiLoad has
    // been introduced in HS 5.x
    List<Author> authors = fullTextEntityManager.unwrap(Session.class).byMultipleIds(Author.class).multiLoad(vcIds);

    // fill our container object with the facet and author entity
    List<EntityFacet<Author>> entityFacets = new ArrayList<>(facets.size());
    for (int i = 0; i < facets.size(); i++) {
    entityFacets.add(new EntityFacet<Author>(facets.get(i), authors.get(i)));
    }

    entityFacets.stream().forEach(System.out::println);
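For completeness, here is a sketch of how the Book side of the mapping might expose the faceted field through includePaths; the exact mapping in the demo project may differ.

import java.util.Set;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToMany;

import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.IndexedEmbedded;

// Hypothetical Book mapping; includePaths exposes authors.id_facet,
// the field used by the faceting request above.
@Entity
@Indexed
public class Book {

    @Id
    private Integer id;

    @IndexedEmbedded(includePaths = { "id_facet" })
    @ManyToMany
    private Set<Author> authors;
}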
For code reference you may check this repository: https://github.com/czetsuya/hibernate-search-demo

Got a question? Don't hesitate to ask :-)