Play with Testcontainers


What is “Testcontainers”?

Testcontainers is a Java 8 library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. So you no longer need to create and customize a Docker Compose file to configure the databases or anything else you need for testing.

Let’s look at an example with Spring Boot, Docker and Testcontainers. All sources are available here:

Before you start, make sure that Docker or Docker Machine is installed on the machine you run your tests on.

Let’s initialize a Spring Boot service from

After that we’ll add the Testcontainers dependency to the project generated by the Spring Initializr:
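The dependency can be declared in the project’s pom.xml, for example like this (the version number is only a placeholder; check for the current release):

```xml
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.19.8</version>
    <scope>test</scope>
</dependency>
```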

Testcontainers Adapters 

So, let’s say our tests need the following databases: PostgreSQL and MySQL, and also Selenium. The benefit of Testcontainers is that you don’t need to configure those dependencies in a Docker Compose file. You just add the adapter you need. Testcontainers offers the following adapters:

  • vault
  • testcontainers
  • spock
  • selenium
  • pulsar
  • postgresql
  • oracle-xe
  • nginx
  • neo4j
  • mysql
  • mssqlserver
  • mockserver
  • mariadb
  • localstack
  • kafka
  • jdbc
  • influxdb
  • elasticsearch
  • dynalite
  • database-commons

For more information see:

PostgreSQL with Testcontainers 

So, I add PostgreSQL, MySQL and Selenium to the dependencies of the project:
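In Maven terms, the three modules could be added like this (the version is a placeholder):

```xml
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <version>1.19.8</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>mysql</artifactId>
    <version>1.19.8</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>selenium</artifactId>
    <version>1.19.8</version>
    <scope>test</scope>
</dependency>
```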

Let’s write a test case using PostgreSQL. I use HikariCP as the connection pool because it is much faster than other pools, see here

As you see, we just use the PostgreSQLContainer to configure the database we need for the test

and the performQuery method:
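A test along these lines could look like the following sketch (class and method names are my own illustration; it assumes the testcontainers, postgresql and HikariCP test dependencies on the classpath and a running Docker daemon):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

import java.sql.ResultSet;

import static org.junit.Assert.assertEquals;

public class PostgresTest {

    @Test
    public void containerStartsAndAnswersQueries() throws Exception {
        // Start a throwaway PostgreSQL instance just for this test
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:12")) {
            postgres.start();
            assertEquals(1, performQuery(postgres, "SELECT 1"));
        }
    }

    // Open a pooled connection against the container and run a query
    private int performQuery(PostgreSQLContainer<?> container, String sql) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(container.getJdbcUrl());
        config.setUsername(container.getUsername());
        config.setPassword(container.getPassword());
        try (HikariDataSource ds = new HikariDataSource(config);
             ResultSet rs = ds.getConnection().createStatement().executeQuery(sql)) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```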

We can also use the @Rule in the JUnit test:

and then you get the connection from the PostgreSQLContainer:
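A sketch of the rule-based variant (names are illustrative; the container is started before and stopped after each test):

```java
import org.junit.Rule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

import java.sql.Connection;
import java.sql.DriverManager;

import static org.junit.Assert.assertTrue;

public class PostgresRuleTest {

    // JUnit starts the container before each test and stops it afterwards
    @Rule
    public PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:12");

    @Test
    public void canConnect() throws Exception {
        try (Connection connection = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
            // the connection is live as long as the rule keeps the container running
            assertTrue(connection.isValid(2));
        }
    }
}
```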

MySQL with Testcontainers 

For MySQL it is the same procedure as with PostgreSQLContainer. You can use the MySQLContainer class from the mysql module (org.testcontainers:mysql):

Here follows the test example to check the MySQL version:
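A version check might be sketched as follows (the image tag is an example):

```java
import org.junit.Rule;
import org.junit.Test;
import org.testcontainers.containers.MySQLContainer;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

import static org.junit.Assert.assertTrue;

public class MySqlVersionTest {

    @Rule
    public MySQLContainer<?> mysql = new MySQLContainer<>("mysql:5.7");

    @Test
    public void reportsExpectedVersion() throws Exception {
        try (Connection connection = DriverManager.getConnection(
                mysql.getJdbcUrl(), mysql.getUsername(), mysql.getPassword())) {
            ResultSet rs = connection.createStatement().executeQuery("SELECT VERSION()");
            rs.next();
            // the reported version should match the image tag we started
            assertTrue(rs.getString(1).startsWith("5.7"));
        }
    }
}
```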

Selenium with Testcontainers 

For Selenium tests, Testcontainers offers the BrowserWebDriverContainer. You also need the Selenium remote driver dependency for the automated browsers.

and then you can test your front-end project like this:
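A minimal sketch (the target URL is a placeholder for your front end):

```java
import org.junit.Rule;
import org.junit.Test;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testcontainers.containers.BrowserWebDriverContainer;

import static org.junit.Assert.assertTrue;

public class FrontEndTest {

    // Starts a containerized Chrome that Selenium drives remotely
    @Rule
    public BrowserWebDriverContainer<?> chrome =
            new BrowserWebDriverContainer<>().withCapabilities(new ChromeOptions());

    @Test
    public void pageHasTitle() {
        RemoteWebDriver driver = chrome.getWebDriver();
        driver.get("https://example.org"); // replace with your front end's URL
        assertTrue(driver.getTitle().length() > 0);
    }
}
```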


Integration tests, i.e. tests that interact with external systems, are also required to fully cover all aspects of testing. If these systems are databases, Testcontainers can be used for them. The tests then run against a real instance of the selected database, which increases confidence that the code tested this way actually works.
Apart from databases, Testcontainers can also be used for tests that require a running browser. Above all, the effort of installing the appropriate browser locally is eliminated, which is a clear advantage.

Are you sure that your passwords are secure?

Progressive Secrets Management with HashiCorp and Spring Vault

Access data and credentials are environment-specific configuration settings whose management requires strict methods of safeguarding. Storing these credentials completely and securely across various environments, and handling them confidentially, is a challenge for access authorization and secure record keeping.
Modern methods simplify these processes and allow a secure management and storage of passwords in different environments for different apps. One of those methods is the use of HashiCorp Vault, which centrally saves and manages passwords by use of various mechanisms, such as key/value or dynamic processes.
Spring also offers a high-level abstraction of HashiCorp Vault, called “Spring Vault”, which provides client-side support for existing Spring applications and thus simplifies the transition to HashiCorp Vault. Spring Vault offers REST interfaces for accessing the passwords stored in HashiCorp Vault. A short explanation of these two technologies illustrates their benefits.

by Walid El Sayed Aly

All highly sensitive data that you want to protect are so-called secrets. A secret can be any element used for authentication or authorization, for instance a username, password, API token or TLS certificate. However, secrets also include sensitive or confidential data like credit card numbers, e-mails or tax identification numbers. Secrets need different ways of saving, storing and managing than other kinds of data to fulfill their special security requirements. There are new methods on the market that maximize the protection of sensitive data and its use.
Vault by HashiCorp saves, stores and manages passwords, certificates, API keys and other secrets according to strict security criteria. With HashiCorp Vault, secrets can have configurable lifecycles with an individually determined period of validity for passwords [1]. HashiCorp Vault is written in the programming language Go and works with Go’s lightweight threading mechanism, goroutines. Goroutines are lightweight, concurrent functions that interact more efficiently than normal Java threading mechanisms. If, for example, a goroutine blocks, the Go runtime automatically schedules another one.

HashiCorp Vault is an executable application that can be started with the command vault server. The dev environment can be started with the parameter -dev: vault server -dev. Configuration files for the production environment are written in HCL, the HashiCorp Configuration Language. HCL resembles the JSON format but additionally supports comments.

Vault has a consistent interface for each secret and a secure access control. In addition, it records a detailed monitoring protocol. Vault can save any random key/value passwords and encrypts these data before saving them. It works with dynamic secrets. That is a safety mechanism that creates passwords only when required. Thus, the client can use a password just once. Also, leasing, revocation and renewal of passwords are possible. Vault renews passwords automatically and can also restore them.

Illustration 1: A High-Level Representation of the Vault Architecture [2]

The client can request a password for his application via an HTTP API (Ill. 1). Vault then generates a dynamic password, sends it back to the client and logs the process. There is a clear separation between the internal and external components: the storage backend used by Vault is not trusted by design. The security barrier automatically encrypts all data leaving Vault using AES encryption (Advanced Encryption Standard) in Galois/Counter Mode (GCM). When the security barrier reads data, the GCM authentication tag is verified during decryption to detect manipulation.

Vault supports cloud backends such as Amazon Web Services (AWS), database management systems like MySQL, PostgreSQL or Oracle Database, and NoSQL databases like MongoDB. Each of these components can be used as a mounted system. You can mount, unmount or remount single backends. With mount, a backend is attached together with its secrets. Multiple backends of the same type can be made available by configuring different mount points. On unmounting, the secret backend is no longer available and all its physically saved password data is deleted. On remounting, the backend is moved to another mount point without deleting the data. The different backends can be moved with vault mount/unmount/remount. All backend systems that have been mounted can be viewed with the command vault mounts (Ill. 2). Each component has detailed documentation on how to save passwords or sensitive data.

Illustration 2: Example of Vault Backend Systems

Vault Compared to Other Systems
There are other systems that aim at managing and saving secrets according to current requirements. Examples include:

  • Amazon Key Management Service (KMS) is a service by AWS that saves secrets in the hardware safety module HSM. In contrast to Vault, KMS focuses on how passwords are saved.
  • Keywhiz is an open-source project for the management of secrets. It offers a RESTful API for password management. Clients can authenticate via certificates or cookies. With Keywhiz, the passwords are kept in memory and the data can additionally be stored on a cache server. Keywhiz also offers a web UI for data management, although only for premium customers.
  • You can also manage passwords with Puppet using symmetric encryption. Symmetric-key algorithms use just one key to achieve secure communication, privacy, integrity and authentication. Puppet expects a user’s passwords to be encrypted in a format that other systems also accept; passwords for Unix systems, for example, must be generated in the SHA1 format. Puppet is a simple method for saving secrets, but its security level is not very high compared to Vault.

Spring Vault

Spring Vault offers an abstraction of Vault for the Spring framework [3]. At the time this article was written, Spring Vault was available in version 2.0. Spring Vault has supported HashiCorp Vault since version 1.0. Version 2.0 contains many new features and important changes, such as Vault repositories and a reactive Vault client. The Spring team reuses the concept of Spring Data repositories for the Vault repositories, so developers can use all CRUD functionality with Vault secrets. The Vault repositories are activated with the annotation @EnableVaultRepositories, which also activates Spring Data. Repositories are meant to reduce boilerplate code and can be implemented on various persistence stores. For reactive programming, Spring Vault offers ReactiveVaultOperations, a central interface that specifies a fundamental set of Vault operations. Spring Vault’s reactive client support is based on modular authentication steps and Spring’s functional WebClient on top of Reactor Netty, which provides a fully non-blocking, event-driven HTTP client.

ReactiveVaultTemplate encapsulates the main interaction with Vault; it registers with Vault at initialization and uses the token during its entire lifetime. In addition, Spring Vault 2.0 makes it possible to store the authentication steps, such as token authentication or the sending of credentials to Vault. Of course, Spring Vault uses Spring Web’s RestTemplate as the primary interface. Additionally, Spring Vault supports Apache HTTP Components, Java’s HttpURLConnection, Netty and OkHttp 3 by Square.


Two examples show how to manage microservices in the cloud with Spring Vault and HashiCorp Vault. A static example demonstrates how to save secrets as key/value pairs on physical hardware. Dynamic management is illustrated with a database as secret backend; we will use the PostgreSQL DBMS. The demo project and its sources are available on GitHub [4].

You can start Vault in the console; you should export the environment variables in your bash_profile:

First, we define the Vault address and give Vault a particular token. Then we define the path to a certificate that we have created locally earlier. Now we can start Vault:
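The variable values below are illustrative; adjust the token and the certificate path to the ones created earlier:

```shell
# Address Vault listens on and a token (example values)
export VAULT_ADDR=https://127.0.0.1:8200
export VAULT_TOKEN=00000000-0000-0000-0000-000000000000
# Path to the locally created certificate
export VAULT_CACERT=$HOME/vault/config/vault.crt

# Start Vault with a configuration file
vault server -config=$HOME/vault/config/vault.hcl
```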

We define our backend in the Vault configuration file. In this case Vault starts in-memory. In addition, we configure our listeners, i.e. the address and ports to which Vault needs to react. Vault supports exclusively the TCP protocol. Listing 1 shows an example of a Vault configuration file. tls_cert_file and tls_key_file are mandatory fields. disable_mlock is an interesting setting for the development environment: if disable_mlock is true, mlock is not used. mlock locks the calling process’s virtual address space into RAM.

  Listing 1: Vault Configuration File:
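The original listing is not reproduced here; a minimal configuration along the lines described above could look like this (paths are placeholders):

```hcl
# In-memory storage backend (development only)
storage "inmem" {}

# TCP listener with TLS
listener "tcp" {
  address       = "127.0.0.1:8200"
  tls_cert_file = "/path/to/vault.crt"
  tls_key_file  = "/path/to/vault.key"
}

# Do not lock memory in the dev environment
disable_mlock = true
```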

After starting Vault, you can execute the various configuration steps that are necessary for the environment, like the roles and set-up of the databases, the AppId authentication or Cloud Foundry. For our example we need a configuration of PostgreSQL. For this purpose, we first have to mount PostgreSQL. After that the connection information for the database is passed to Vault, and finally the roles are defined.
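The steps above could be sketched with the Vault CLI like this (connection URL, credentials and role name are example values):

```shell
# Mount the PostgreSQL secret backend
vault mount postgresql

# Tell Vault how to reach the database
vault write postgresql/config/connection \
    connection_url="postgresql://vault:vault@localhost:5432/postgres"

# Define a role whose SQL template Vault uses to create dynamic users
vault write postgresql/roles/readonly \
    sql="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"
```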

In this manner you can also connect other database systems, such as Oracle Database or MySQL. Illustration 3 shows the first dynamic PostgreSQL secrets with Vault. With the command vault read postgresql/creds/readonly, Vault generates dynamic access data for PostgreSQL, which can only be used with the defined token. Now Vault is ready for use. With the Spring Initializr you can get a Spring Boot initialization project; a Maven or Gradle project is produced with the necessary dependencies. For this example, the dependencies for Vault configuration, Spring Web, JPA, Lombok and PostgreSQL were used.

Ill. 3: Dynamic Passwords with Vault and PostgreSQL 


Key/Value as Static Example

The application’s context configuration is defined in Java and derived from the abstract Spring class AbstractVaultConfiguration. For this you need to implement the two methods clientAuthentication and vaultEndpoint. You can also define sslConfiguration and create the paths for the trustStore certificates. Listing 2 shows a Vault configuration with Spring Vault.

Listing 2: Spring Vault Java Configuration:
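The listing is not reproduced here; a configuration along the lines described could look like this sketch (host, port and token are placeholders):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.vault.authentication.ClientAuthentication;
import org.springframework.vault.authentication.TokenAuthentication;
import org.springframework.vault.client.VaultEndpoint;
import org.springframework.vault.config.AbstractVaultConfiguration;

@Configuration
public class VaultConfig extends AbstractVaultConfiguration {

    @Override
    public VaultEndpoint vaultEndpoint() {
        // Where the Vault server is listening
        return VaultEndpoint.create("localhost", 8200);
    }

    @Override
    public ClientAuthentication clientAuthentication() {
        // Static token authentication; the token value is a placeholder
        return new TokenAuthentication("00000000-0000-0000-0000-000000000000");
    }
}
```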

Finally, you can initialize the VaultTemplate from the context and write, read and delete passwords as key/value pairs in Vault (listing 3). Here, the passwords are saved unencrypted in Vault.
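A sketch of such key/value operations, assuming a configuration class as in Listing 2 (here called VaultConfig; the path secret/myapp and the values are examples):

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.vault.core.VaultTemplate;
import org.springframework.vault.support.VaultResponse;

public class KeyValueExample {

    public static void main(String[] args) {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(VaultConfig.class);
        VaultTemplate vaultTemplate = context.getBean(VaultTemplate.class);

        // Write a secret as key/value
        Map<String, String> secret = new HashMap<>();
        secret.put("password", "H@rdT0Gue$$");
        vaultTemplate.write("secret/myapp", secret);

        // Read it back
        VaultResponse response = vaultTemplate.read("secret/myapp");
        System.out.println(response.getData().get("password"));

        // Delete it again
        vaultTemplate.delete("secret/myapp");
        context.close();
    }
}
```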

PostgreSQL Database as Dynamic Example

Right after the Vault server has been started and the PostgreSQL configuration has been defined in Vault, you can dynamically pass the PostgreSQL access data to the application with the support of Vault and Spring. The application itself no longer has to take care of the secrets. In other words, the access data is no longer saved in some property file on a machine; Vault takes care of it in its backend. The application only has to supply the URL to the database and, of course, present Vault a correct token. It does not matter whether the application runs in the cloud or not: there is only one place for the management of all secrets. In the dynamic VaultConfig file (listing 1), the DataSource is also defined (listing 4).

Data Source Configuration:
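A DataSource definition along these lines could look like the following sketch (the property names are assumptions):

```java
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    // Only the URL is configured in the application itself …
    @Value("${spring.datasource.url}")
    private String url;

    // … username and password are supplied dynamically from Vault
    @Value("${spring.datasource.username}")
    private String username;

    @Value("${spring.datasource.password}")
    private String password;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.postgresql.Driver");
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        return dataSource;
    }
}
```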

The application also passes the user name and password as variables. To illustrate this, a customer entity has been created in PostgreSQL (listing 5).

Listing 5: Customer Entity:
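A minimal version of such an entity, using the JPA and Lombok dependencies mentioned above (field names are illustrative):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

import lombok.Data;
import lombok.NoArgsConstructor;

@Entity
@Data
@NoArgsConstructor
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String firstName;
    private String lastName;
}
```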

Now you can access the database and modify the customer entity with the aid of Spring Vault.

The application first accesses the Vault backend to get the dynamic access data (listing 6), presenting Vault the correct token. If the request is successful, you get back a VaultResponseSupport from postgresql/creds/readonly. The application takes the access data from the response and passes it to the dynamic Vault configuration. Finally, the application gets access to PostgreSQL. The secrets are valid as long as the application is online; whenever the application starts afresh, new secrets are generated. No other application gets access to the database with the same secrets.

Listing 6: Dynamic PostgreSQL with Vault:
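This read could be sketched as follows (the Credentials class is my own illustration of the response payload):

```java
import org.springframework.vault.core.VaultTemplate;
import org.springframework.vault.support.VaultResponseSupport;

import lombok.Data;

public class DynamicCredentialsExample {

    // Maps the fields Vault returns for postgresql/creds/readonly
    @Data
    public static class Credentials {
        private String username;
        private String password;
    }

    public static void fetch(VaultTemplate vaultTemplate) {
        VaultResponseSupport<Credentials> response =
                vaultTemplate.read("postgresql/creds/readonly", Credentials.class);
        Credentials credentials = response.getData();
        // Pass username/password on to the DataSource configuration
        System.out.println("user: " + credentials.getUsername());
    }
}
```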

These two examples demonstrate that it is possible to manage passwords safely in the cloud. A disadvantage, of course, is that the cloud services depend on Vault; therefore the Vault server must be one hundred percent available. Vault Enterprise offers replication of the data: there are clusters with multiple Vault nodes, and the entire communication between the primary and secondary clusters is encrypted via TLS sessions. The enterprise mode provides additional features, like support of the hardware security module (HSM) and a web interface for data management.


We all agree that we must take special care in handling and securing sensitive data like passwords or credit card data. The old methods of encrypting passwords locally, or not even encrypting them at all, are completely outdated. Such data should be hashed or managed with modern technologies; neither the applications nor operations should know the secrets. The use of Vault is easy and efficient and therefore all the more advisable for the management of highly sensitive data. The discussed examples show that you can manage passwords statically as well as dynamically with HashiCorp Vault. Spring Vault offers a reasonable abstraction of HashiCorp Vault and considerably simplifies the transition of existing Spring applications to HashiCorp Vault.

Links & Literature






A Good Team Player: Git in Combination with Other Systems


A Powerful Team for an Efficient Version Management
Nowadays, software applications cannot do without version management. The majority of projects therefore use systems like SVN or CVS. Git, too, has become very popular due to its flexibility and many advantages, especially when it is linked with SVN and CVS in the same project.
The following article illustrates with the aid of several examples how existing applications, which already use a version control system, can also be connected with Git and how they benefit from this arrangement.

by Walid El Sayed Aly
Git has become quite a serious competitor for the central version management systems (SVN, CVS and RCS) as well as the distributed ones, such as Mercurial and Bazaar. A comparison of the most important versioning systems from February 2010 [1] shows that Git already had a remarkable position in this field at that time, receiving the highest score in the evaluation.
There are numerous papers and articles on the introduction of Git and the illustration of its usefulness. It has also been widely discussed why users have more advantages if they use Git in preference to SVN or CVS. Yet, somehow it has been overlooked that by combining a central version management system like SVN (without git-svn) or CVS (without git-cvsimport) with Git in one and the same project, you can combine all their features and derive the greatest benefit from that.

Git Compared against a Central Version Management System
In contrast to central version management systems, a distributed version management system does not depend on a central repository. It is precisely that characteristic that makes it powerful, because the system does not depend on the network. Git belongs to these distributed version management systems; it only needs the network when comparing branches with other repositories. Local Git directories are always full-featured repositories for the projects in which they are used. With Git, every user has his own repository that he can use for committing changes. Furthermore, Git is also an extremely fast system, and the exchange of changes is hugely simplified by Git’s independence from the network, e.g. when creating full-value clones on GitHub or Google Code.
Despite its many merits, we do not want to place special emphasis on Git here. We rather want to shift our attention to the combination of Git with a versioning system and to the advantages that Git offers as a cloud solution.

Lifecycle Status of a File with Git
Image 1 shows the separate states of a file from the viewpoint of Git. There are basically four different states [2]: A file can be untracked, i.e. not versioned. It can be unmodified, which means that the file has already been versioned but not yet changed. In the third state the file is modified: it has been versioned and changed, but is not yet staged. The final state is staged: the file has been versioned, changed and staged, but not yet committed.

Image 1: Lifecycle State of a File with Git

Git is an uncomplicated, very efficient system and as such especially convenient for users with a penchant for programming always and everywhere. Git owes its popularity particularly to its up-to-dateness and mobility since it allows developers unrestricted possibilities for programming. Git makes software development possible everywhere, on the beach, on the train or abroad – even when the local company network is unavailable. That is an enormous benefit for mobile users. Thus, Git facilitates productivity and offers tremendous economy of time. Projects can be finalized much quicker and more effectively, which not only saves time but also money.
Git offers the possibility to record all individual changes in a history and upload them, packaged and at once, to SVN or CVS. Yet, what do software developers do when they work on an SVN or CVS project while travelling or at home on weekends? They also want to record the changes they made away from their desks as a backup. The obvious answer is: use Git! Still, how does that work with SVN or CVS?

Integration of Git into SVN
The question is how to control the change history. Programmers who develop with an IDE like Eclipse, IntelliJ or Visual Studio know that all these IDEs have a feature for recording the history of data changes. However, these features have several shortcomings. For instance, history in the IDEs is only kept for a short period. It is also not possible to view or visualize changes as effectively as in a version control system. Another handicap is that the IDEs cannot build local branches or tags. As a result, history tracking cannot be done properly with an IDE alone.
So, how can Git be connected with an SVN project? In contrast to central version management systems, distributed version management systems commit all changes locally. Subsequently, these changes can be committed in the cloud or network. The most obvious method to use Git with SVN is git-svn. The function is already included in the standard installation of Git as a shell console tool for Linux or Mac OS. Regrettably, git-svn is missing in many Git GUI systems, as are many Git plugins for the IDEs. In order to combine an existing SVN project with Git, that project first needs to be cloned with Git. Git-SVN offers a possibility for that:
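A clone could look like this (the repository URL is a placeholder):

```shell
# Clone an SVN repository into a local Git repository;
# -s / --stdlayout expects the standard trunk/branches/tags layout
git svn clone --stdlayout https://svn.example.org/repos/myproject
```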

The option stdlayout means that Git also downloads the data from branches and tags, since SVN implements branches and tags differently from Git. As soon as that is done, you can continue to work with Git. Most Git functions, such as add, commit, branch, and so on, can be used during the process. If you want to commit the changes in Git, you use commit, for example:
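For example (file name and message are placeholders):

```shell
git add src/Main.java
git commit -m "Fix validation bug"
```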

If you want to commit the changes afterwards to SVN, too:
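This is a single command:

```shell
# Replay the local Git commits against the SVN server
git svn dcommit
```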

The Pros and Cons of the Use of Git-SVN
Git-SVN is the perfect tool to start with Git without having to carry out radical organizational changes. Unfortunately, the tool also has quite a few drawbacks. Committing and checking out projects takes a lot of time, which is particularly adverse for large projects with a long history in an SVN repository. Additionally, conflicts can arise on merges. By way of comparison: a search for the term git svn problem yields more than 11,000 results. Besides, the tool only works with SVN projects. That means that it is impossible to use a CVS project in parallel with Git via the git-svn tool.

The use of the tool also requires a certain know-how because most of the work is done on the console, since git-svn is not supported by many GUI tools and plugins. [3] offers a list of graphical front-ends and tools.
Scott Chacon describes one of the biggest shortcomings in his book “Pro Git” [4]: during git-svn commits, i.e. when using git svn dcommit, a Subversion commit is made for each commit, and the local Git commit is then rewritten to include a specific identifier. That means that all SHA-1 checksums of a programmer’s commits change. This shows that the simultaneous use of Git-based remotes and a Subversion server in one project is not practical. If you look at the last commit, you will see that the new git-svn-id has been added: the SHA checksum that originally started with 97031e5 at committing now starts with 938b1a5. If you want to push to a Git server as well as to a Subversion server, you have to push to the Subversion server first (dcommit), because that operation changes the commit data.

How to Integrate Git with CVS
All the projects for which CVS is still in use can also be handled with Git. With the aid of the git-cvsimport tool, CVS projects can be imported in order to clone them with Git. Listing 1 shows how to initialize a CVS project with Git. The -A option of cvsimport is optional, but it helps to harmonize the history with Git and renders a more comprehensive Git “look”.
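Such an import could look like this (server, paths and module name are placeholders):

```shell
# Import the CVS module "myproject" into a new Git repository;
# -A maps CVS user names to full Git author entries
git cvsimport -v -d :pserver:user@cvs.example.org:/cvsroot \
    -A authors.txt -C myproject-git myproject
```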

The first import still takes comparatively long. After the initialization, a “master” branch will be built as in Listing 1. There are a few configurations that are useful if you work with Git and CVS; Listing 2 shows these settings. Here, the configuration for the CVS module, the CVS import and the root are defined so that they do not have to be entered again each time. To mirror the changes back into CVS, you use “git cvsexportcommit”.
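The idea could be sketched like this (values and paths are placeholders; the exact configuration keys depend on your setup):

```shell
# Remember the CVS connection data so it need not be retyped on every call
export CVSROOT=:pserver:user@cvs.example.org:/cvsroot

# Export a Git commit back into a CVS working copy and commit it there
git cvsexportcommit -w /path/to/cvs/checkout -c <commit-sha>
```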
However, the usage presents some challenges for programmers. As with git-svn, it works only in an explicitly CVS-capable project. Unfortunately, GUI tools or plug-ins for Eclipse or IntelliJ do not exist. Also, there can be difficulties with the standard Git functions after the cloning.

Recommendation for the Use of Git with Other Systems
After having evaluated the pros and cons of combining an SVN or CVS system with Git via git-svn and git-cvsimport, we will now take a look at my actual alternative solution. It is essentially a relatively simple idea with a highly efficient outcome: initializing Git in any SVN or CVS project without the use of further tools.
The key word in this context is “metadata”. Git keeps its repository metadata in the “.git” directory. All projects or files that are linked to a version management system see only this metadata. A Subversion project, for instance, saves its history and the entire management in a directory called “.svn”; Subversion controls the whole tree with that directory. So, what happens if you initialize Git within a Subversion directory?

git init
Initialized empty Git repository in /user/walid/svnProject/.git/
All of a sudden you have created a Git repository within an SVN directory. Now you can use all Git functionalities and the corresponding benefits: you can create branches and tags, commit data changes and retrace them without having to take SVN into account.
There is, however, a trick that you must apply first: ignore the SVN metadata in Git! That is easily done by creating a file called .gitignore in the main directory of the project and entering the metadata directory .svn. Thereupon Git will ignore the entire SVN metadata. Alternatively, you can add the entry to .git/info/exclude. Or else, under Linux or Mac OS, you can register a global ignore file such as “.gitignore_global”, in which you can list all files to be ignored. Illustration 2 shows an SVN project that was subsequently initialized with Git.

Image 2: Structural representation of a Git-SVN project

According to this principle, you can operate all central management system projects with Git. A big advantage is that developers can always work independently from the network. They make full use of Git’s benefits and are thus free from the shortcomings of git-svn. No matter if you are at the beach, on the train, in Central Park or at the duck pond – you always have your own repository with you. A little disadvantage is that SVN does not notice these changes and that they are not saved centrally. Nevertheless, they do not get lost, since you can commit them to SVN later and then save them centrally.
After having scrutinized source code with SVN, CVS and Git, let us now look at documents as well. You can also use Git as a cloud solution to save and centrally manage files like Word or Excel documents. You can forego Google Docs or Dropbox by creating a virtual server and using Git for the management of the documents, for example like this: first you build an empty repository on the remote server with
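For example (the server path is a placeholder):

```shell
# On the remote server: create a bare repository that only stores history
git init --bare /srv/git/documents.git
```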

Then you clone that repository:
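For example (host and path are placeholders):

```shell
# On your local machine
git clone ssh://user@server.example.org/srv/git/documents.git
```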

In this way you can always reproduce your files and manage them safely and centrally from any system. A very small drawback is that you have to commit and manage the changes yourself because there is no tool that batches them in the background. But there is also a trick for that: instead of always running “git add” and then “git commit”, you can execute both commands in a single step. You just have to create an alias:
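One way to define such an alias (the name “ac” is my own choice):

```shell
# "git ac <message>" stages everything and commits in one step
git config --global alias.ac '!git add -A && git commit -m'
```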

After that you can execute “git add” and “git commit” as follows:
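For example (assuming an alias named “ac”; the message is a placeholder):

```shell
git ac "updated quarterly report"
```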

You can use Git anywhere. It is fast, independent, modern and very safe. However, you need a bit of know-how to find your way around at the start. But as soon as you get the hang of it, it is great fun to work with, offering ease and economy of time in many areas. Although Git does not always guarantee smooth operation, its many benefits unquestionably outweigh its smaller weaknesses.

Links & Literature
[4], p. 207

No NullPointerException again

How many bugs do we get in the course of our lives as developers because of NullPointerExceptions! If you never want to get an NPE again, read this article.
NullPointerExceptions can be annoying for everyone trying to deliver their work. After demoing the feature, it is totally maddening when the quality guys report back a bug due to an NPE.
So how can we clear our code of a NullPointerException? Read further and you will find some solutions to choose from.

Use java.util.Optional with Java 8

If you already use Java 8, you can try java.util.Optional. It is a class that encapsulates an optional value.
The goals of Optional are that null checks are no longer required, there are no more NullPointerExceptions at run-time, and we can develop clean and neat APIs without boilerplate code.

– To create an optional object:
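For example (the class name and the value "Sam" are my own illustration):

```java
import java.util.Optional;

public class CreateOptional {

    // An Optional with no value at all
    static Optional<String> emptyValue() {
        return Optional.empty();
    }

    // Optional.of wraps a non-null value and
    // throws a NullPointerException for null
    static Optional<String> valueOf(String s) {
        return Optional.of(s);
    }

    public static void main(String[] args) {
        System.out.println(emptyValue());  // Optional.empty
        System.out.println(valueOf("Sam")); // Optional[Sam]
    }
}
```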

– To check if the object is empty and any value exists or not:
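A sketch using isPresent() (names are illustrative):

```java
import java.util.Optional;

public class CheckOptional {
    public static void main(String[] args) {
        Optional<String> name = Optional.of("Sam");
        Optional<String> nothing = Optional.empty();

        // isPresent tells whether a value exists
        System.out.println(name.isPresent());    // true
        System.out.println(nothing.isPresent()); // false
    }
}
```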

– To create a value with static API:

– The way to escape the NullPointerException dilemma is to use the ofNullable API:
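For example (names are illustrative):

```java
import java.util.Optional;

public class OfNullableExample {
    public static void main(String[] args) {
        String nobody = null;

        // ofNullable accepts null and simply yields an empty Optional
        Optional<String> maybe = Optional.ofNullable(nobody);

        System.out.println(maybe.isPresent()); // false
    }
}
```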

In this case we pass in a null reference, so we will not get the NullPointerException.

– The alternative case with orElse() / orElseGet() / orElseThrow:

Here you can give the object the choice that, in case of null, the value is replaced with the name SAM.
With orElseThrow() you throw an exception:
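These alternatives could be sketched like this (the fallback name SAM follows the text above):

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class FallbackExample {
    public static void main(String[] args) {
        Optional<String> nobody = Optional.ofNullable(null);

        // orElse: a plain default value
        System.out.println(nobody.orElse("SAM")); // SAM

        // orElseGet: the default is computed lazily
        System.out.println(nobody.orElseGet(() -> "SAM")); // SAM

        // orElseThrow: fail loudly instead of falling back
        try {
            nobody.orElseThrow(() -> new NoSuchElementException("no name"));
        } catch (NoSuchElementException e) {
            System.out.println(e.getMessage()); // no name
        }
    }
}
```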

– To get the value:

– To filter the elements with filter():
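Both get() and filter() could be sketched as follows (names are illustrative):

```java
import java.util.Optional;

public class GetAndFilterExample {
    public static void main(String[] args) {
        Optional<String> name = Optional.of("Sam");

        // get() returns the value (or throws NoSuchElementException if empty)
        System.out.println(name.get()); // Sam

        // filter() keeps the value only if the predicate matches
        Optional<String> startsWithS = name.filter(n -> n.startsWith("S"));
        Optional<String> startsWithT = name.filter(n -> n.startsWith("T"));

        System.out.println(startsWithS.isPresent()); // true
        System.out.println(startsWithT.isPresent()); // false
    }
}
```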

The Optional class has been improved in JDK 9, where the following changes have been made:

ifPresentOrElse() allows execution of an action in a positive or negative case.

stream() converts the Optional to a stream.

or() allows linking of multiple computations in an elegant way.
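The three JDK 9 additions could be sketched as follows (requires Java 9 or later; names are illustrative):

```java
import java.util.Optional;
import java.util.stream.Stream;

public class Jdk9OptionalExample {
    public static void main(String[] args) {
        Optional<String> name = Optional.of("Sam");
        Optional<String> nobody = Optional.empty();

        // ifPresentOrElse: one action for a value, another for emptiness
        name.ifPresentOrElse(
                n -> System.out.println("hello " + n),  // hello Sam
                () -> System.out.println("nobody there"));

        // stream: empty Optionals simply vanish from the stream
        long values = Stream.of(name, nobody)
                .flatMap(Optional::stream)
                .count();
        System.out.println(values); // 1

        // or: lazily supply an alternative Optional
        System.out.println(nobody.or(() -> Optional.of("SAM")).get()); // SAM
    }
}
```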

The purpose of Optional is not to replace every single null reference in your codebase, but rather to help design better APIs in which users can tell from the signature whether to expect an optional value. If you cannot use JDK 8 or 9, you still have the option to use Guava:

To add a dependency on Guava using Maven, use the following:
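For example (the version number is a placeholder; check for the current release):

```xml
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>28.0-jre</version>
</dependency>
```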

You can use Optional with Guava. Guava’s Optional is an immutable object used to contain a non-null object; an absent Optional is used to represent null. The class has various utility methods that let code handle values as available or unavailable instead of checking for null.

It offers methods such as absent(), asSet(), get(), isPresent(), of(T reference) and orNull().

//Optional.fromNullable – allows the passed parameter to be null

//Optional.of – throws a NullPointerException if the passed parameter is null
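Both variants could be sketched like this (assuming Guava on the classpath; names are illustrative):

```java
import com.google.common.base.Optional;

public class GuavaOptionalExample {
    public static void main(String[] args) {
        // fromNullable tolerates null …
        Optional<String> maybe = Optional.fromNullable(null);
        System.out.println(maybe.isPresent()); // false
        System.out.println(maybe.orNull());    // null

        // … while of() would throw a NullPointerException for null
        Optional<String> name = Optional.of("Sam");
        System.out.println(name.get()); // Sam
    }
}
```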

You will find more information in the API-Doc:

Vavr is an awesome, functional library for Java 8+ that provides persistent data types and functional control structures.

To add a dependency on Vavr using Maven, use the following:
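For example (the version number is a placeholder; check for the current release):

```xml
<dependency>
    <groupId>io.vavr</groupId>
    <artifactId>vavr</artifactId>
    <version>0.10.2</version>
</dependency>
```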

In addition to the common methods, Vavr also offers an Option. Option in Vavr is a monadic container type which represents an optional value; instances of Option are either an instance of Some or None.
The main goal of Option is to eliminate null checks in our code by leveraging the Java type system.
Option is an object container in Vavr with a similar goal to Optional in Java 8, but Vavr’s Option also implements Serializable and Iterable and has a richer API.
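A sketch (assuming Vavr on the classpath; names are illustrative):

```java
import io.vavr.control.Option;

public class VavrOptionExample {
    public static void main(String[] args) {
        // Option.of(null) yields None instead of throwing
        Option<String> nobody = Option.of(null);
        Option<String> name = Option.some("Sam");

        System.out.println(nobody.isEmpty());        // true
        System.out.println(name.getOrElse("SAM"));   // Sam
        System.out.println(nobody.getOrElse("SAM")); // SAM
    }
}
```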

Collection of useful Unix commands


– How to count all directories?
find . -maxdepth 1 -type d | wc -l

– How to search and replace in file?
find comet/appconf/* -type f -exec sed -i 's/WhatYouSEARCHFor/WhatYouReplaceFor/g' {} \;
(slashes must be escaped, e.g. http:\/\/)

– How to search and delete files?
find . -name "FILE-TO-FIND" -exec rm -rf {} \;

– How to search in file?
grep -rnw comet/appconf/* -e "WhatYouWantToSearch"
find . | xargs grep 'WhatYouWantToSearch' -sl

– How to append a new line to the end of a file?
for file in $(find ./* -iname "*.properties"); do echo 'hello' >> $file; done

– How to copy files fast (e.g. with rsync)?
rsync -auvr /Users/walid/Desktop/1/* /Users/walid/Desktop/2/

– How to sort Files by size?
du --max-depth=1 . | sort -n -r


– How can you display Linux users?
sudo cat /etc/passwd | cut -d":" -f1

– How to show the pid for certain ports?
Mac: lsof -n -i:8080
Linux: netstat -tulpn | grep 8080

Why I chose HTTPie instead of cURL on the Command Line for HTTP APIs

HTTPie (pronounced aitch-tee-tee-pie) is a command line HTTP client. Its goal is to make CLI interaction with web services as human-friendly as possible.

cURL is a tool to transfer data from or to a server, using one of the supported protocols (DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP). The command is designed to work without user interaction.

The biggest difference between cURL and HTTPie is the output: HTTPie automatically colorizes the response and formats JSON. These defaults make HTTPie very friendly to my tired developer eyes.

cURL POST Example:
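For example (URL and payload are placeholders):

```shell
curl -X POST https://api.example.org/users \
     -H "Content-Type: application/json" \
     -d '{"name": "walid"}'
```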

HTTPie POST Example:
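The same request (URL and payload are placeholders):

```shell
# JSON is the default: key=value pairs become a JSON body
http POST https://api.example.org/users name=walid
```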

JSON APIs are common, so HTTPie assumes that’s what’s coming.

cURL GET Example:
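For example (the URL is a placeholder):

```shell
curl https://api.example.org/users/1
```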

HTTPie GET Example:
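The same request (the URL is a placeholder):

```shell
# GET is the default when no request data is given
http https://api.example.org/users/1
```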

When the METHOD argument is omitted from the command, HTTPie defaults to either GET (with no request data) or POST (with request data).

You can find more information about HTTPie here.

Logging with Spring JDBC and Craftsman Spy

I found it very interesting to use the Craftsman Spy framework. This framework is very useful for JDBC logging with Spring JDBC.

Craftsman Spy is a free, open-source framework for JDBC logging, implemented as a JDBC driver. You can download it and bind it into your local Maven repository. How does it work? I will explain it in Read More