How to create an encrypted USB stick

Creating an encrypted USB stick under GNU/Linux is fairly easy. First, let's install the required packages and erase everything from the stick:

$ export USB_STICK="/dev/sdb"
$ apt-get install parted cryptsetup-bin
$ shred -n 10 -v -z "${USB_STICK}"

This process can take some time depending on the stick’s size. Once it’s finished, we can create a partition table:

$ parted -s -a optimal "${USB_STICK}" -- mklabel msdos mkpart primary ext2 1 -1

Finally, we can create an AES-encrypted partition:

$ export ENCRYPTED_PART="/dev/sdb1"
$ export ENCRYPTED_PART_NAME="cryptostick"
$ cryptsetup --verify-passphrase luksFormat "${ENCRYPTED_PART}" \
	--cipher aes --key-size 256 --hash sha256
$ cryptsetup luksOpen "${ENCRYPTED_PART}" "${ENCRYPTED_PART_NAME}"
$ mkfs.ext4 "/dev/mapper/${ENCRYPTED_PART_NAME}"
$ cryptsetup close "${ENCRYPTED_PART_NAME}"

Now our encrypted stick is ready to use!

$ mkdir "/mnt/${ENCRYPTED_PART_NAME}"
$ cryptsetup luksOpen "${ENCRYPTED_PART}" "${ENCRYPTED_PART_NAME}"
$ mount "/dev/mapper/${ENCRYPTED_PART_NAME}" "/mnt/${ENCRYPTED_PART_NAME}"
$ echo test > "/mnt/${ENCRYPTED_PART_NAME}/file.txt"
$ umount "/mnt/${ENCRYPTED_PART_NAME}"
$ cryptsetup close "${ENCRYPTED_PART_NAME}"

Debugging HTTP traffic like a boss

One can simply use tcpdump (or Wireshark):

$ tcpdump -ttt -A -q -l 'tcp port 80 or tcp port 443'

If you are looking for a more convenient way of viewing HTTP traffic, you can use the following quick 'n' dirty Python script, which requires only the python-flask and python-requests libraries:

# -*- coding: utf8 -*-

import requests
from flask import Flask
from flask import Response
from flask import request

app = Flask(__name__)

@app.route('/<path:url>', methods=['POST', 'GET', 'PUT', 'DELETE'])
def proxy(url):

    def print_dict(direction, data):
        for (key, val) in data.items():
            print("{0} {1}: {2}".format(direction, key.title(), val))

    print("> {0} {1}".format(request.method, url))
    print_dict('>', request.form)

    req = requests.request(request.method, url, data=request.form)

    print("< Status code {0}".format(req.status_code))
    print_dict('<', req.headers)

    return Response(req.content), req.status_code

if __name__ == '__main__':
    app.run()

It will forward all requests to the URL specified as the path, e.g.

$ curl

will forward a GET request to and display the following debug output

< Status code 200
< Content-Length: 31
< Server: nginx
< Connection: keep-alive
< Access-Control-Allow-Credentials: true
< Date: Mon, 11 May 2015 21:34:17 GMT
< Access-Control-Allow-Origin: *
< Content-Type: application/json
  "origin": ""

Of course this script is buggy as hell - it cannot even forward GET parameters…
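The simplest fix for that particular bug is to pass params=request.args to requests.request. If you prefer to rebuild the target URL yourself, the standard library is enough; a minimal sketch (the helper name is my own):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_query(url, args):
    """Re-attach forwarded query parameters to the upstream URL."""
    parts = urlsplit(url)
    extra = urlencode(args)
    # Keep any query string already embedded in the URL and append
    # the parameters the client sent to the proxy.
    query = "&".join(part for part in (parts.query, extra) if part)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query,
                       parts.fragment))
```

Inside the proxy you would then call requests.request(request.method, with_query(url, request.args), data=request.form).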

mitmproxy to the rescue!

mitmproxy is an interactive console program that allows traffic flows to be intercepted, inspected, modified and replayed. After installation, bring it up:

$ mitmproxy

and configure the proxy for your client, e.g.

$ curl -x http://localhost:8080

You should see a nice list of all your requests in mitmproxy window

Requests list

After selecting a request, press Enter to view the details


and press Tab to navigate between Request and Response.


It’s worth mentioning that mitmproxy can also work in transparent mode.
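mitmproxy can also be scripted with Python addons, which is handy when you want custom logging instead of the interactive UI. A sketch assuming the current addon API (module-level event hooks; the file name log_flows.py is mine):

```python
# log_flows.py - run with: mitmproxy -s log_flows.py
# mitmproxy calls the module-level "response" hook once per completed flow.

def response(flow):
    # flow.request / flow.response carry the parsed HTTP messages.
    print("{0} {1} -> {2}".format(flow.request.method,
                                  flow.request.pretty_url,
                                  flow.response.status_code))
```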

Other tools

I’ve also used the following tools while playing with HTTP traffic:

  • Burp Suite - free edition includes Burp Proxy for traffic monitoring,
  • Fiddler - web debugging proxy written in .NET (it should work on GNU/Linux too!).

Fun with kexec

What is kexec?

Simply speaking, kexec is a mechanism in the Linux kernel that allows you to quickly boot into a new kernel from the currently running one. It’s like a reboot, but it skips the entire bootloader stage: no hardware initialization, no system firmware. Basically, we get extremely fast reboots! Believe it or not, booting up an enterprise-class machine can take a few minutes…

First steps

First make sure that you have kexec system call enabled in your running kernel:

$ grep CONFIG_KEXEC /boot/config-`uname -r`

You also need to install the kexec-tools package on your system. If you are using Debian GNU/Linux (or some derivative):

$ apt-get install kexec-tools

Alternatively you can build kexec-tools from source:

$ wget -q
$ tar xvpfz kexec-tools.tar.gz
$ cd kexec-tools-VERSION
$ ./configure
configure: creating ./config.status
config.status: creating Makefile
config.status: creating include/config.h
$ make && make install

Now let’s assume that we want to test our freshly built kernel:

$ uname -a
Linux testvm 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u2 x86_64 GNU/Linux
$ kexec --load /boot/vmlinuz-4.0.0-rc1 --initrd=/boot/initrd.img-4.0.0-rc1 --reuse-cmdline
$ kexec --exec
$ uname -a
Linux testvm 4.0.0-rc1 #1 SMP Mon Mar 2 21:21:48 CET 2015 x86_64 GNU/Linux

Voilà! We have kernel 4.0.0-rc1 up and running!

Why would I need this?

Imagine that you need to install Debian GNU/Linux on a remote server running Not Your Favorite Linux Distribution, with only a serial console connected to it (let’s assume that you have kexec-tools installed):

$ export DEBIAN_VERSION="wheezy"
$ export DEBIAN_ARCH="amd64"
$ export DEBIAN_MIRROR=""
$ export NETBOOT_URL="${DEBIAN_MIRROR}/debian/dists/${DEBIAN_VERSION}/main/installer-${DEBIAN_ARCH}/current/images"
$ wget -q "${NETBOOT_URL}/netboot/netboot.tar.gz"
$ wget -q -O - "${NETBOOT_URL}/SHA256SUMS" | grep "netboot/netboot.tar.gz"
af54dcbd0866b0093693bd62c38dbd1c9af106aed2b34a415db2a3d51856e960  ./netboot/netboot.tar.gz
$ sha256sum netboot.tar.gz # always check checksums ;)
af54dcbd0866b0093693bd62c38dbd1c9af106aed2b34a415db2a3d51856e960  netboot.tar.gz
$ tar xfz netboot.tar.gz
$ kexec --load debian-installer/amd64/linux \
		--initrd=debian-installer/amd64/initrd.gz \
		--command-line="console=ttyS0 priority=low"
$ kexec --exec
[  109.165007] kexec: Starting new kernel

After a few seconds you should get the Debian installer screen:

Debian GNU/Linux installer screen
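The "always check checksums" step above is easy to automate; a minimal stdlib sketch that verifies a downloaded file against one line of a SHA256SUMS listing (the file names are just examples):

```python
import hashlib

def sha256_matches(path, sums_line):
    """Compare a local file's SHA-256 digest against a SHA256SUMS entry.

    sums_line looks like:
    '<hex digest>  ./netboot/netboot.tar.gz'
    """
    expected = sums_line.split()[0]
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        # Hash in chunks so large images don't need to fit in memory.
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected
```

sha256_matches('netboot.tar.gz', line) returns True when the digests agree.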

You can also boot some ISO images using GRUB4DOS:

$ kexec --load /path/to/grub.exe \
		--initrd=/path/to/image.iso \
		--command-line="--config-file=debug on; map --mem (rd)+1 (0xff); map --hook; chainloader (0xff)"
$ kexec --exec

I’ve tested it with the following ISOs:

Artifactory - uploading 3rd party artifacts with Maven

Uploading artifacts to Artifactory via web interface is really clunky… Let’s script it!




#!/bin/bash

[[ -z $DEBUG ]] || set -x

umask 0077

if [ $# -lt 4 ]; then
    echo "usage: $( basename "$0" ) groupId artifactId version filename"
    exit 1
fi

GROUP_ID="$1"
ARTIFACT_ID="$2"
VERSION="$3"
FILE="$4"
PACKAGING="${FILE##*.}"    # e.g. "jar", derived from the file name

TEMP_SETTINGS="$( mktemp --suffix=.xml )"

trap 'rm -rf "${TEMP_SETTINGS}"; exit' EXIT INT TERM

cat <<EOF > "${TEMP_SETTINGS}"
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns=""

mvn -e -s "${TEMP_SETTINGS}" deploy:deploy-file \
    -Dpackaging="${PACKAGING}" \
    -DrepositoryId="${ARTIFACTORY_REPO}" \
    -DgroupId="${GROUP_ID}" \
    -DartifactId="${ARTIFACT_ID}" \
    -Dversion="${VERSION}" \
    -Dfile="${FILE}"

Oracle - LDAP over SSL without OAS

First, you need to install the DBMS_LDAP package to use LDAP on an Oracle database at all:


Then install and configure stunnel:

$ apt-get install stunnel
$ cat <<EOF > /etc/stunnel/ldap.conf
client = yes

[ldap]
accept  =
connect =
EOF
$ sed -i.bak 's/ENABLED=0/ENABLED=1/' /etc/default/stunnel4
$ invoke-rc.d stunnel4 restart

That’s it! Let’s test our configuration:


DECLARE
    LDAPSession DBMS_LDAP.session;
    LDAPResult  PLS_INTEGER;
BEGIN
    DBMS_LDAP.use_exception := TRUE;
    LDAPSession := DBMS_LDAP.init(
        hostname => '',
        portnum  => 389
    );
    LDAPResult  := DBMS_LDAP.simple_bind_s(
        ld     => LDAPSession,
        dn     => 'cn=user,dc=example,dc=com',
        passwd => 'password'
    );
    DBMS_Output.put_line('User authenticated!');
    LDAPResult  := DBMS_LDAP.unbind_s(LDAPSession);
END;
/

Hint: for testing purposes you can also use socat:

$ socat TCP-LISTEN:389,reuseaddr,fork,verify=0

Debian amd64 and Juniper VPN

Below are instructions for running the VPN client without 32-bit Java in the browser.

  1. First, download the client JAR - after logging in to the VPN’s web interface, download the file /dana-cached/nc/ncLinuxApp.jar

  2. Unpack the JAR and set the appropriate permissions on the files:

     $ mkdir -p ~/.juniper_networks
     $ unzip -d ~/.juniper_networks/network_connect ncLinuxApp.jar
     $ chown root: ~/.juniper_networks/network_connect/ncsvc
     $ chmod 6711 ~/.juniper_networks/network_connect/ncsvc
     $ chmod 744 ~/.juniper_networks/network_connect/ncdiag
  3. Download the server certificate:

     $ export VPN_SERVER=""
     $ export VPN_PORT="443"
     $ openssl x509 -in <(openssl s_client -connect "${VPN_SERVER}:${VPN_PORT}" -prexit 2>/dev/null) \
                   -outform der -out ~/.juniper_networks/network_connect/${VPN_SERVER}.der
  4. Install the required packages:

     $ dpkg --add-architecture i386 && apt-get update
     $ apt-get install libc6:i386 zlib1g:i386
  5. Run the client:

     $ ~/.juniper_networks/network_connect/ncsvc -h $VPN_SERVER \
                                                -f ~/.juniper_networks/network_connect/${VPN_SERVER}.der \
                                                -u USERNAME -r REALM

    where USERNAME is the user name and REALM is the “Źródło autoryzacji” (authorization source) selected during login

Debian and PECL

Building PECL packages for Debian GNU/Linux is as easy as:

$ apt-get install dh-make-php
$ dh-make-pecl --depends somelib --build-depends somelib-dev module
$ cd php-module-1.0.0 && dpkg-buildpackage -rfakeroot

Problems with Subversion

At some point, a fairly large repository (~300 MB) served by Apache (mod_dav) started showing this problem:

$ svn up
svn: Decompression of svndiff data failed

A glance at the server logs:

[error] Provider encountered an error while streaming a REPORT response.  [500, #0]
[error] A failure occurred while driving the update report editor  [500, #104]
[error] Error writing base64 data: Connection reset by peer  [500, #104]

Running svnadmin verify revealed that something was wrong with revision 413:

$ svnadmin verify /var/repos/name
* Verified revision 0.
* Verified revision 1.
* Verified revision 411.
* Verified revision 412.
svnadmin: Decompression of svndiff data failed

Fortunately, fsfsverify exists!

$ ./ /var/repos/name/db/revs/413
NodeRev Id: 0.0.r413/12323283
type: dir
Error InvalidCompressedStream: Invalid compressed instr stream at offset 7116827 (Error -3 while decompressing: incorrect header check)
Try running with -f to fix the revision
$ ./ -f /var/repos/name/db/revs/413
NodeRev Id: 0.0.r413/12323283
type: dir
Copy 289710 bytes from offset 7116813
Write 289710 bytes at offset 7116675
Fixed? :-)  Re-run fsfsverify without the -f option

Finally, a quote from the fsfsverify page:

I’ve found that most corruptions happen on systems configured to use a threading model whether via svnserve or mod_dav_svn. If you have a system that supports forking, use either Apache’s mpm_prefork or make sure svnserve is not running in threaded-mode. That substantially reduces the chance of corruption.

An accidental lvremove

What do you do when your finger slips on the keyboard and you remove the wrong volume? You can restore it from a backup:

$ grep remove /etc/lvm/archive/*
/etc/lvm/archive/ = "Created *before* executing 'lvremove /dev/vgname/lostone'"
$ cp /etc/lvm/backup/vgname ~/vgname_lvm_backup
$ vgcfgrestore --verbose -n lostone vgname
$ lvchange -ay /dev/vgname/lostone