Friday, June 12, 2015

...Docker and devicemapper's thinpool in RHEL 7

I’ve been working with Docker this week for an OpenShift v3 demo, and I’ve been struggling with storage for Docker, so here are my notes, just in case anyone needs them, or I need them again myself.
Docker on RHEL 7 is recommended to use the devicemapper storage driver with thin provisioning. I was setting up some Vagrant boxes for my environment and ran into issues with image pulls never finishing, or errors while writing into the Docker storage. It turned out that my VM was created with a very small amount of disk space for Docker, so it could not run properly. This is how I diagnosed the problem and how I fixed it.
Kudos to Nick Strugnell for helping me out.

Diagnose

Once I did a docker pull and it got stuck, I needed to know what the problem was, so the first thing was to inspect LVM and look at the configuration.
LVM (Logical Volume Manager) has 3 concepts:
  • PV (Physical Volume): This is the classic HDD
  • VG (Volume Group): This can span multiple Physical Volumes
  • LV (Logical Volume): These are the volumes directly usable by the applications.
For every type there is a set of easy-to-understand commands that help us:
  • pvs (Physical Volume Summary), pvcreate (create a Physical Volume), pvchange, pvck, pvdisplay, pvmove, pvremove, pvresize, pvscan
  • vgs (Volume Group summary), vgcfgbackup, vgchange, vgconvert, vgdisplay, vgextend, vgimportclone, vgmknodes, vgremove, vgsplit, vgcfgrestore, vgck, vgcreate, vgexport, vgimport, vgmerge, vgreduce, vgrename, vgscan
  • lvs (Logical Volume Summary), lvchange, lvcreate, lvextend, lvremove, lvresize, lvscan, lvconvert, lvdisplay, lvreduce, lvrename
I did a summary of my VM:
[root@ose3-helper ~]# pvs
  PV         VG         Fmt  Attr PSize PFree
  /dev/vda3  VolGroup00 lvm2 a--  9.78g    0

[root@ose3-helper ~]# vgs
  VG         #PV #LV #SN Attr   VSize VFree
  VolGroup00   1   3   0 wz--n- 9.78g    0

[root@ose3-helper ~]# lvs
  LV          VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LogVol00    VolGroup00 -wi-ao----   7.97g
  LogVol01    VolGroup00 -wi-ao----   1.50g
  docker-pool VolGroup00 twi-aot-M- 256.00m             100.00 0.22
It looks like my docker-pool is full, and very small. So here is the problem.
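If the docker daemon is still responsive, you can confirm the same thing from Docker itself, since docker info reports the devicemapper pool usage:
# Cross-check from Docker: shows Data Space Used/Total/Available for the thinpool
docker info | grep -i -A 2 'data space'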

Why is it using a docker-pool LV?

In RHEL 7 docker is configured to run with devicemapper, as seen here:
[root@ose3-helper ~]# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=-s devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/VolGroup00-docker--pool

How can I configure devicemapper for docker?

In order to use dm.thinpooldev you must have an LVM thinpool available; the docker-storage-setup package will assist you in configuring LVM. However, you must provision your host to fit one of these three scenarios:
  • Root filesystem on LVM with free space remaining on the volume group. Run docker-storage-setup with no additional configuration; it will allocate the remaining space for the thinpool.
  • A dedicated LVM volume group where you’d like to create your thinpool
cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
SETUP_LVM_THIN_POOL=yes
EOF
docker-storage-setup
  • A dedicated block device, which will be used to create a volume group and thinpool
cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
SETUP_LVM_THIN_POOL=yes
EOF
docker-storage-setup
Once complete you should have a thinpool named docker-pool and docker should be configured to use it in /etc/sysconfig/docker-storage.
# lvs
LV                  VG        Attr       LSize  Pool Origin Data%  Meta% Move Log Cpy%Sync Convert
docker-pool         docker-vg twi-a-tz-- 48.95g             0.00   0.44

# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/openshift--vg-docker--pool

If you had previously used Docker with loopback storage, you should clean out /var/lib/docker. This is a destructive operation and will delete all images and containers on the host.
systemctl stop docker
rm -rf /var/lib/docker/*
systemctl start docker

This topic is taken entirely from Erik Jacobs’ OSEv3 training, so kudos to him.

Solution

As I didn’t have enough free space in my VG, and I couldn’t unmount LogVol00 to reduce its size, what I did was:
  • Add a second drive to the KVM VM (With VirtManager, although virsh should work the same)
  • Add the PV
  • Resize the VG to consume the newly added PV
  • Two options:
    • Resize the docker LV (easier)
    • Delete the docker LV and recreate it.
      • Stop docker
      • Delete /var/lib/docker/*
      • Delete the docker LV
      • Rerun docker-storage-setup to reconfigure the docker LV to use all the added space
      • Start docker

Add a second drive to the KVM VM

With Virt Manager, just select the VM you want to add the drive to, open the VM’s console window, switch to the hardware details view (the light bulb), and click on "+ Add Hardware". Select the size, and VirtIO as the bus.
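If you prefer the command line, something along these lines should work with virsh (the VM name, image path and size are only examples):
# Create a backing image and attach it to the VM as a persistent VirtIO disk
qemu-img create -f raw /var/lib/libvirt/images/docker-disk.img 8G
virsh attach-disk ose3-helper /var/lib/libvirt/images/docker-disk.img vdb --targetbus virtio --persistent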

Add the PV

To see the name of the new disk, you can cat /proc/partitions:
[root@ose3-helper ~]# cat /proc/partitions
major minor  #blocks  name

 252        0   11534336 vda
 252        1       1024 vda1
 252        2     204800 vda2
 252        3   10278912 vda3
 253        0    1572864 dm-0
 253        1    8355840 dm-1
 253        2      32768 dm-2
 253        3     262144 dm-3
 253        4     262144 dm-4
 253        5   10485760 dm-5
 252       16    8388608 vdb
We can see that the disk I just added is vdb, so I will create a PV for it with pvcreate:
[root@ose3-helper ~]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created
And list it with pvs:
[root@ose3-helper ~]# pvs
  PV         VG         Fmt  Attr PSize PFree
  /dev/vda3  VolGroup00 lvm2 a--  9.78g    0
  /dev/vdb              lvm2 ---  8.00g 8.00g

Resize the VG to consume the newly added PV

Now I need to make the VG span this new PV, so I will use vgextend (I will list before and after to see the changes):
[root@ose3-helper ~]# vgs
  VG         #PV #LV #SN Attr   VSize VFree
  VolGroup00   1   3   0 wz--n- 9.78g    0

[root@ose3-helper ~]# vgextend VolGroup00  /dev/vdb
  Volume group "VolGroup00" successfully extended

[root@ose3-helper ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   2   3   0 wz--n- 17.75g 7.97g
Now I can see the 8 GB that I added as free space.

Resize the docker LV

If you prefer just to extend the volume, this is the command:
[root@ose3-helper ~]# lvextend -l 100%FREE /dev/VolGroup00/docker-pool
  Size of logical volume VolGroup00/docker-pool_tdata changed from 480.00 MiB (15 extents) to 7.75 GiB (248 extents).
  Logical volume docker-pool successfully resized

[root@ose3-helper ~]# lvs
  LV          VG         Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LogVol00    VolGroup00 -wi-ao---- 7.97g
  LogVol01    VolGroup00 -wi-ao---- 1.50g
  docker-pool VolGroup00 twi-a-t--- 7.75g             3.23   0.22

Delete the docker LV and recreate it

As I need to remove the docker LV so it can be recreated through the docker-storage-setup script, I need to stop the docker process and remove what was there:
[root@ose3-helper ~]# systemctl stop docker

[root@ose3-helper ~]# rm -rf /var/lib/docker/*
Now I will remove the docker LV so I can recreate it from scratch:
[root@ose3-helper ~]# lvremove VolGroup00/docker-pool
Do you really want to remove active logical volume docker-pool? [y/n]: y
  Logical volume "docker-pool" successfully removed
And now I will recreate it with the script:
[root@ose3-helper ~]# cat <<EOF > /etc/sysconfig/docker-storage-setup
> SETUP_LVM_THIN_POOL=yes
> EOF

[root@ose3-helper ~]# docker-storage-setup
  Rounding up size to full physical extent 32.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume VolGroup00/docker-pool and VolGroup00/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted VolGroup00/docker-pool to thin pool.
  Logical volume "docker-pool" changed.

[root@ose3-helper ~]# lvs
  LV          VG         Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LogVol00    VolGroup00 -wi-ao---- 7.97g
  LogVol01    VolGroup00 -wi-ao---- 1.50g
  docker-pool VolGroup00 twi-a-t--- 4.94g             0.00   0.11

It seems that this second option, since it uses thin provisioning, doesn’t assign the whole available space to the docker-pool.
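Whichever of the two options you choose, once the pool has been extended or recreated you can start docker again and confirm it is using the thinpool:
systemctl start docker
docker info | grep -i 'pool name'   # should report VolGroup00-docker--pool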

Wednesday, June 10, 2015

...JBoss projects in docker containers with an external RDBMS

I’ve been recently asked this:
Our project recently switched to using RDBMS + JPA as the backend storage. As such, the distro ships with DDL that need to be installed in Wildfly/EAP. What would you recommend?
I think this is a very interesting question, so I’ll post the answer here. Of course, it’s open for discussion :-D
There are different options here, so I will summarize them.
When we talk about containers, one piece of advice that is usually given is "1 process, 1 container". As such, if you want to provide your application with an RDBMS you’ll end up with 2 processes (EAP/Wildfly and the RDBMS), unless you use embedded H2, which is an in-memory database executed in the same process as the application server.

Use H2 as RDBMS

H2 can be configured as an embedded in memory database server. This option only works if you don’t plan to use your application server (your projects really) in a clustered way, as data will be local to the application server.
When using H2 you can:
  • Let Hibernate create the schema for you. This option works when you only have to create a database schema, but not populate it with data. If you have to do that, you’ll have to provide a way of creating this master data once the container runs, and only the first time. In the end, if this is the case, it would be a hack. Anyway, from my point of view, not a recommended approach for containers.
  • Precreate the schema using a DDL and also load the data. An option is to create the H2 database with the data at build time, and feed the application server with the H2 DB data files already created (schema + master data), as sketched below. This process of creating the database can be part of your build process. I personally don’t like it, but it works, and it seems a valid option for H2.
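For that build-time step, the H2 distribution ships a RunScript tool that can execute a DDL/data script against a file-based database; here is a minimal sketch, where the jar location, database path and script name are only examples:
# Pre-create the H2 data files (schema + master data) as part of the build
java -cp h2.jar org.h2.tools.RunScript \
     -url jdbc:h2:./target/h2data/myapp -user sa \
     -script src/main/sql/schema-and-data.sql
The resulting files under ./target/h2data can then be copied into the application server image.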

Use a full RDBMS (Mysql/Postgresql)

For this you need to have 2 containers, one for the EAP/Wildfly and another for the RDBMS. Again multiple options.
  • Care only for your own container. You have to prepare your application server with the appropriate RDBMS driver and datasource configuration, and configure it to inject the values through ENV variables at container creation time. This option is good, but you leave the user with a lot of boilerplate, as they’ll have to create the RDBMS container and load the DDL (schema + data). From my point of view, an option only if you provide good instructions on how to do it.
  • Plain Docker orchestration (fig, docker-compose). A good option for a single machine, and to use and test your project. You have 2 containers (EAP/Wildfly and the RDBMS). Typically, you have to ensure that the RDBMS starts before the EAP/Wildfly process so the DDL is fully executed against the database (see the sketch after this list). You can see an apiman example here. I like this approach, as you build both containers and provide a single command for the user to execute, but it is not very scalable, unless you use docker-swarm.
  • Kubernetes, OpenShift v3 (the Red Hat way :-D). A good option to let people try the project on a PaaS, once v3 is available (free of charge on the OSE Online version). For this you will need to create all the pods, deploymentConfigs, etc. required by OpenShift. From my point of view, it will be the option in the coming months.
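To illustrate the ordering concern with plain docker commands (image names, users and passwords below are purely hypothetical):
# Start the RDBMS first so the schema/data DDL can run before the application server boots
docker run -d --name myapp-db -e POSTGRES_USER=myapp -e POSTGRES_PASSWORD=secret postgres
# Give the database a moment to initialize, then start EAP/Wildfly linked to it
docker run -d --name myapp-eap --link myapp-db:db -p 8080:8080 myorg/myapp-eap
A docker-compose (or fig) file expresses the same two containers and the link between them as a single command for the user.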
Any of the options will work, but there are some constraints, either on the build process (for H2) or on the runtime ordering (for orchestration). I like the orchestrated-containers option more, although I have to admit that H2 is much easier and probably integrates better with the development process. On the other hand, the other options help you craft your project in such a way that you’ll be testing things like the installation process, authentication, HA, … at a very early stage.
So in summary, a matter of taste.
Examples from the JBoss projects official site:
I HOPE IT HELPS

Monday, June 1, 2015

...use Infinispan caches with SwitchYard

Sometimes, when developing SwitchYard services you might need to develop certain functionality where a cache could probably provide you with great benefit, things like:
  • Clustered configuration
  • Clustered storage of information
  • Clustered synchronization service
We are lucky that we can use Infinispan caches with SwitchYard, as Infinispan is already in the application server, and if you are using FSW you are entitled to use it for your applications without needing an additional JDG entitlement.
Here, I’m going to explain very briefly the parts you need to take into consideration in order to make use of Infinispan. The rest, what to do with the cache, falls on your side.

Configuration

Wildfly and EAP bring the infinispan subsystem into their configuration, where you can define your own cache containers. A cache container is a logical grouping of caches that will be registered and accessible through JNDI for your application to use. There are multiple configuration options you can set per container, and per cache in a container, and you should check the Infinispan configuration for all the available options, but the section you can/should configure looks like this:
<subsystem xmlns="urn:jboss:domain:infinispan:1.4">
   ....
   <cache-container name="switchyard" default-cache="default" start="EAGER">
       <transport lock-timeout="60000"/>
       <replicated-cache name="default" mode="SYNC" start="EAGER" batching="true">
           <locking isolation="REPEATABLE_READ"/>
       </replicated-cache>
   </cache-container>
   <cache-container name="mycustomcache" default-cache="cacheA" start="EAGER">
       <transport lock-timeout="60000"/>
       <distributed-cache name="cacheA" l1-lifespan="1000" mode="ASYNC" batching="true">
           <eviction strategy="LRU" max-entries="1000"/>
       </distributed-cache>
       <distributed-cache name="cacheB" l1-lifespan="0" mode="ASYNC" batching="true">
           <eviction strategy="LRU" max-entries="10000"/>
           <file-store/>
       </distributed-cache>
   </cache-container>
</subsystem>
Keep in mind that you will be required to start the server with an -ha profile to have replication and jgroups started, otherwise you will only have local caches.
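For example, with a standalone server that means starting it with the HA configuration (the path is an assumption, adjust it to your installation):
$JBOSS_HOME/bin/standalone.sh -c standalone-ha.xml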

Usage

The first thing you need to do in your application is to inject the CacheContainer. As the CacheContainer is registered in JNDI, it can easily be injected as a Resource using the java:jboss/infinispan/container/CACHE_CONTAINER JNDI name.
@Resource(lookup = "java:jboss/infinispan/container/switchyard")
private CacheContainer container;
Once you have the cache container, you need the concrete cache for your use. In the example configuration above, there are 2 caches defined in the mycustomcache container (cacheA and cacheB). You can get a reference to a cache through the injected CacheContainer: getCache() with no arguments returns the container’s default cache, and getCache("cacheA") returns a named one. This can be done once, if you set your component as ApplicationScoped, or every time, or using a Singleton, or any other pattern.
private Cache<String, String> cache;
...
this.cache = this.container.getCache();
And now you can use your cache to store/retrieve information.
cache.put(KEY, value);
cache.putIfAbsent(KEY, value, 10L, TimeUnit.SECONDS);
cache.get(KEY);
cache.remove(KEY);
....
Check out the complete Infinispan documentation or the API.
Remember to check the version of Infinispan for the application server you are using. If FSW, this is 5.2.7.Final.
Check out some sample application.

Thursday, February 12, 2015

...using jolokia to monitor/manage SwitchYard

Install jolokia

Get the latest jolokia war file from their website, rename it to jolokia.war and deploy it into the server.
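Something along these lines should do it (the version in the URL is only an example, check the Jolokia site for the current one, and adjust the deployments path to your server):
# Download the Jolokia agent WAR from Maven Central and drop it into the deployments directory
curl -L -o jolokia.war https://repo1.maven.org/maven2/org/jolokia/jolokia-war/1.2.3/jolokia-war-1.2.3.war
cp jolokia.war $JBOSS_HOME/standalone/deployments/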

Get a list of all SwitchYard MBeans

All SwitchYard MBeans are registered under the org.switchyard.admin JMX domain name, as per the documentation. So we can get a list of what we have:
http://localhost:8080/jolokia/list/org.switchyard.admin

or a description of an MBean:
http://localhost:8080/jolokia/list/org.switchyard.admin/name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding



As it is mentioned on the documentation, there are different types of MBeans:
  • Application: Management interface for a SwitchYard application.
  • Service: Management interface for a composite service in a SwitchYard application. One MBean is registered per composite service.
  • Reference: Management interface for a composite reference in a SwitchYard application. One MBean is registered per composite reference.
  • Binding: Management interface for a gateway binding attached to a composite service or reference. One MBean is registered per binding instance on an application’s composite services and references.
  • ComponentService: Management interface for a component service in a SwitchYard application. One MBean is registered per component service.
  • ComponentReference: Management interface for a component reference in a SwitchYard application. One MBean is registered per component reference.
  • Transformer: Management interface for a transformer in a SwitchYard application. One MBean is registered per transformer.
  • Validator: Management interface for a validator in a SwitchYard application. One MBean is registered per validator.
  • Throttling: Management interface for throttling a service in a SwitchYard application. One ThrottlingMBean is registered per composite service instance.
There are two additional MBean types, which are supertypes that define shared behavior:
  • Lifecycle: Supertype of BindingMXBean which provides operations related to lifecycle control for service and reference bindings.
  • Metrics: Supertype of multiple MBeans providing message metrics information.

Starting/Stopping bindings

As service and reference bindings extend the Lifecycle MXBean, we can start or stop a binding and check what state it is in (a command-line equivalent with curl is sketched after this list):
  • Check the state
http://localhost:8080/jolokia/read/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding/State

  • Stop the binding
http://localhost:8080/jolokia/exec/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding/stop

  • Check the state
http://localhost:8080/jolokia/read/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding/State

  • Start the binding
http://localhost:8080/jolokia/exec/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding/start

  • Check the state
http://localhost:8080/jolokia/read/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding/State
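All of the URLs above can also be driven from the command line. As a sketch, this is the same stop operation issued as a Jolokia POST request with curl (note that the quotes inside the ObjectName have to be escaped in the JSON body):
curl -s -X POST http://localhost:8080/jolokia/ -d '{"type":"exec","mbean":"org.switchyard.admin:name=\"_OrderService_soap_1\",service=\"{urn:switchyard-quickstart:bean-service:0.1.0}OrderService\",type=Binding","operation":"stop"}'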

Getting metrics

If you want to get metrics, it is very simple; the only thing is that you need to know which metrics matter to you, as every component, composite and binding provides many metrics. Once you know what information you need, you can use Jolokia to get it, and maybe use that information to feed an ElasticSearch or InfluxDB database and use Kibana/Grafana to view and explore the information graphically. RTGov is also available.
  • Get all the information available for a binding
http://localhost:8080/jolokia/read/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding

  • Get the TotalCount for a binding
http://localhost:8080/jolokia/read/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding/TotalCount
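For example, a small polling loop that extracts just the value from the JSON response (this assumes jq is installed; the interval is arbitrary):
# Poll the binding's TotalCount every 10 seconds and print only the value field
while true; do
  curl -s 'http://localhost:8080/jolokia/read/org.switchyard.admin:name=%22_OrderService_soap_1%22,service=%22%7Burn:switchyard-quickstart:bean-service:0.1.0%7DOrderService%22,type=Binding/TotalCount' | jq '.value'
  sleep 10
done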

Getting metrics from multiple MBeans

You might want to get some metrics for more than one MBean. You can use wildcards for this; once you know which types of MBeans and which attributes you want, it is very easy.
http://localhost:8080/jolokia/read/org.switchyard.admin:name=*,service=*,type=Binding/MinProcessingTime

  • More complex pattern
http://localhost:8080/jolokia/read/org.switchyard.admin:name=%22*soap*%22,service=*,type=Binding/MinProcessingTime

Search the MBeans you care for

When you have many apps deployed, you might not know which MBeans are there or what their ObjectNames are. You can search for them:
http://localhost:8080/jolokia/search/org.switchyard.admin:type=Binding,*

Demo

If you want to test this, I have created a Dockerfile that you can use right away, based on the latest SwitchYard image. It is available here.
You just need to get this file, and build the image:
curl https://raw.githubusercontent.com/jorgemoralespou/fsw-demo/master/monitoring-with-jolokia/Dockerfile -o Dockerfile
docker build --rm -t "switchyard-with-jolokia" .
And then run it:
docker run -it --rm -p 8080:8080 -p 9990:9990 switchyard-with-jolokia
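Once the container is up, a quick way to check that Jolokia is responding is to hit its version endpoint:
curl http://localhost:8080/jolokia/version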