Author Archives: antonyjepson

Linux Enthusiast, Computer Hacker

301: Blog has moved

As mentioned in my prior post, my new blog is now hosted at my personal domain. You can expect faster load times, more frequent postings, and a broader range of topics covered.

Hosting my own blog

Currently, my blog is served through WordPress. Visitors either enter the site through a redirect or via web searches. While WordPress is a simple way to maintain a blog, at times I would like a bit more control over my content in the event WordPress disappears or my credentials are compromised.

What follows is an investigation into self-hosting alternatives.

My baseline requirements were as follows:

  • Hosted in the EU
  • Easy off-site back up
  • Embedding of images permitted
  • At least 32GB of space for content
  • Thorough documentation of the service used
  • Basic analytics
  • Light-weight and fast loading pages
  • Reduced susceptibility to DDoS attacks
  • $10.00 / mo maximum base price
  • Supported in mobile / desktop web

My stretch requirements were:

  • Replicated across the 7 continents (in the event a post becomes popular)
  • Moderated comments
  • A/B testing on post content
  • Encrypted access via SSL

Starting with the hosting infrastructure, I considered various options.


First was Amazon Elastic Compute Cloud (EC2), with which I am very familiar and have used for years. Referencing the EC2 instance comparison chart for the Ireland region, a small T2 instance came to $166.44 annually when reserved upfront for the year. It doesn’t come with storage, so a 32GB general purpose EBS volume for one year adds $0.11 / GB-month * 12 months * 32GB = $42.24 annually. Additionally, I would need to create a snapshot every two weeks with 2 months of cumulative back ups (8 snapshots retained), which would cost at worst case $0.05 / GB-month * 8 back ups * 32GB * 12 months = $153.60 annually. Network I/O for Amazon is sufficiently cheap (metered at the 10TB / month scale) that it can be excluded from the calculation. The final cost comes to roughly $362 annually, or about $30 / mo; even halving the snapshot retention would only bring that down to around $24 / mo. Clearly, this is quite a bit above my target of $10.00 / mo.
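Recomputing that estimate from the quoted per-unit rates (a quick sanity-check script; the variable names are mine):

```python
# Rough annual-cost sketch for the EC2 option, using the per-unit
# prices quoted above (illustrative, not current AWS pricing).
instance_annual = 166.44                 # 1-year reserved small T2
ebs_gb = 32
ebs_rate = 0.11                          # $/GB-month, general-purpose EBS
storage_annual = ebs_rate * 12 * ebs_gb  # storage for one year

snapshots_retained = 8                   # ~2 months of biweekly snapshots
snapshot_rate = 0.05                     # $/GB-month (worst case: full copies)
snapshot_annual = snapshot_rate * snapshots_retained * ebs_gb * 12

total_annual = instance_annual + storage_annual + snapshot_annual
print(f"${total_annual:.2f}/yr = ${total_annual / 12:.2f}/mo")
```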

Next, I considered hosting it on Windows Azure. Looking at the virtual machine categories, the A1 instance seemed sufficient, costing $20.83 monthly. This tier comes with two disks, a 20GB operating system disk and a 70GB temporary storage disk. Evidently, the temporary storage disk would not be the best location for the blog in the event of termination or other failure.

While Windows Azure seemed tempting, I wanted to drop the price even further. The next choice was Digital Ocean, well known for its ‘droplets’. Unfortunately, the full pricing scale was behind a sign-in page. On the public-facing side, $10 monthly would secure a droplet (instance) with 1GB memory, 1 core processor (very vaguely specified), 30GB SSD, and 1TB transfer. While this certainly seemed like the best option, I wanted to make sure I evaluated five options.

The fourth alternative considered was Google Cloud Platform. Costs can be reduced by using a custom machine type; in this case, I would opt for a 2GB ($0.00361 / GB-hr) instance with 2 vCPUs ($0.02689 / vCPU-hr), bringing the total to 744 hr * [(2 vCPU * $0.02689 / vCPU-hr) + (2 GB * $0.00361 / GB-hr)] = $45.38 / mo.
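The same cross-check for the Google estimate (again a sketch, using the quoted hourly rates and a 744-hour month):

```python
# Custom machine type: 2 vCPUs at $0.02689/vCPU-hr plus 2 GB RAM at
# $0.00361/GB-hr, over a 744-hour (31-day) month.
hours = 744
hourly = (2 * 0.02689) + (2 * 0.00361)  # $/hr for the whole instance
monthly = hours * hourly
print(f"${monthly:.2f}/mo")  # → $45.38/mo
```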

A great alternative is actually using a static site generator and hosting the website on Amazon S3. This means there is no server to maintain and no security updates to worry about. Unfortunately, it would require me to run the site generator locally on my computer and back up my blog manually. Amazon S3 storage in Europe is $0.03 / GB-month, so 32GB would run me $0.96 / mo. GET requests are charged at $0.005 per 1,000. A scenario I used to evaluate the price: if one of my posts went viral and drew 30,000 visitors in one day, with each page view fetching 3 objects, that is 90,000 GET requests = 90 * $0.005 = $0.45; at roughly 28kB per page, the associated data transfer amounts to only a few GB, adding pennies. Not bad!
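The viral-day scenario as a short script (the traffic figures are the hypothetical ones above):

```python
# Hypothetical viral day: 30,000 visitors, each page view fetching
# 3 objects, with GET requests billed at $0.005 per 1,000.
visitors = 30_000
objects_per_view = 3
get_price_per_1000 = 0.005

requests = visitors * objects_per_view        # 90,000 GET requests
request_cost = requests / 1000 * get_price_per_1000
print(f"{requests} GETs -> ${request_cost:.2f}")  # → 90000 GETs -> $0.45
```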

The final option investigated was GitHub Pages, which lets you host a website directly from your public GitHub repository; I am also familiar with GitHub. While it is free, it does not let me select the region for hosting the page, so this was not a valid option.

After considering all the options, I decided to move forward with static hosting on Amazon S3 for my blog, keeping a back up elsewhere in the event it went down or, worst case, I could no longer pay.

Now, let us look at the blogging platform choices.

Blogging platform

As much as WordPress gets a bad rap for not being a light-weight place to host content, it has millions of monthly active users. For the new hosting engine, I wanted something simple, light-weight (so it could run on a cheap, albeit underpowered, virtual machine), and relatively secure. Furthermore, I wanted the resulting content to be performant on both mobile and desktop, with mobile being the primary form factor. Finally, as a stretch goal, commenting would be great and could attract recurring visitors to my site.

I first considered Jekyll for the site generator. At a high level, it takes a text file and processes it into a webpage. While I have a more than adequate understanding of HTML+CSS, dealing with their finer points would definitely detract from writing quality content. Jekyll lets me write posts in a blog-optimised markup language like Textile, and the rendered site is regenerated each time my content is converted and published to the web.
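For illustration, a Jekyll post is just a text file with a small metadata header; something like the following hypothetical example (the front-matter fields shown are Jekyll’s standard ones):

```markdown
---
layout: post
title: "Hosting my own blog"
date: 2016-05-01
---

The body is written in a lightweight markup language; on each
build Jekyll converts this file into a static HTML page.
```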

The second alternative was Hugo. Hugo specialises in partial ‘compilation’ of changed pages, compared to the monolithic compilation offered by Jekyll. While this might be great if I had thousands of pages in my blog, I anticipate mine growing to the low hundreds, so I don’t think it makes sense to deviate from the most supported option.

Based on the above, I opted to go for Jekyll.

Moving forward

This is not a simple process, and some additional voodoo will likely be required to enable SSL support and commenting (likely with Disqus). Expect changes over the coming weeks.

Creating a video montage with ffmpeg

With 1080p (and in some cases 2K) cameras now standard on mobile phones, it’s easier than ever to create high quality video. Granted, the selection of quality free video editors on Windows / Linux leaves something to be desired.

I played with Blender’s VSE (Video Sequence Editor) to try to create a montage of my most recent motorcycle rides, but the interface was unintuitive and had a rather steep learning curve.

So, I turned to the venerable ffmpeg to create my video montage.

Selecting the source content
Before jumping to the command line, you will need to gather the list of clips you want to join and have a basic idea of what you want to achieve. Using your favourite video player (VideoLAN Player, in my case), play through your captured videos and find the timeframe for trimming.

For the purposes of this tutorial, let’s assume this is my game plan:

Video effect 1: fade in from black
Audio track 1: filename "audio.mp3"
Video clip 1: filename ""; length 03:30 [mm:ss]; trim start 01:30; trim end 02:15
Text overlay 1: text "Touch Sensitive - Pizza Guy"; background partially transparent; font Arial; position lower left
Video effect 2: cross fade
Video clip 2: filename ""; length 00:50 [mm:ss]; trim start 00:15; trim end 00:50
Video effect 3: cross fade
Video clip 3: filename ""; length 02:00 [mm:ss]; trim start 00:45; trim end 01:55

Understanding ffmpeg
The ffmpeg documentation is extensive and well written. I highly recommend you spend some time familiarising yourself with the video filter section.

Let’s begin by understanding the file formats of our videos. For this tutorial, since the clips were all recorded by the same camera, they all share the same video / audio codecs and container.

$ ffmpeg -i getting_ready.mp4
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\':
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    creation_time   : 2016-01-01 00:34:11
    original_format : NVT-IM
    original_format-eng: NVT-IM
    comment         : CarDV-TURNKEY
    comment-eng     : CarDV-TURNKEY
  Duration: 00:03:30.47, start: 0.000000, bitrate: 16150 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 14965 kb/s, 30 fps, 30 tbr, 30k tbn, 60k tbc (default)
      creation_time   : 2016-01-01 00:34:11
      handler_name    : DataHandler
      encoder         : h264
    Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 32000 Hz, 1 channels, s16, 512 kb/s (default)
      creation_time   : 2016-01-01 00:34:11
      handler_name    : DataHandler

Important items to note

  • In “Stream #0:0” information, the video encoding is H.264. We will keep this codec.
  • In “Stream #0:1” information, we can see that the audio is raw audio (16 bits per sample, little endian). We will be converting this to AAC in the output.

Trimming the clips
We will begin the effects by trimming the portions we need. As we will be adding effects later, we’ll leave 2 seconds on either side of the trim.

$ ffmpeg -i ./ -ss 00:01:30.0 -c copy -t 00:00:47.0 ./
$ ffmpeg -i ./ -ss 00:00:13.0 -c copy -t 00:00:39.0 ./
$ ffmpeg -i ./ -ss 00:00:43.0 -c copy -t 00:01:12.0 ./
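The -ss/-t values above can be derived from the game plan with a small helper. This is a sketch (the function names are mine); it pads 2 seconds on each side, whereas the actual commands above vary the padding slightly per clip:

```python
# Turn a desired [start, end] range (mm:ss) into ffmpeg -ss/-t values,
# padding the trim by 2 seconds on each side for the later cross fades
# (clamped so we never seek before 0:00).
def to_seconds(mmss):
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

def fmt(total):
    return f"00:{total // 60:02d}:{total % 60:02d}.0"

def trim_args(start, end, pad=2):
    begin = max(to_seconds(start) - pad, 0)
    length = (to_seconds(end) + pad) - begin
    return f"-ss {fmt(begin)} -t {fmt(length)}"

print(trim_args("00:15", "00:50"))  # → -ss 00:00:13.0 -t 00:00:39.0
```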

Applying effects
Note: if you want to speed up processing time while you get the hang of this, you can scale the videos down, apply the effects, and then re-run against the full-size videos once you’re satisfied with the output.
$ ffmpeg -i ./ -vf scale=320:-1 ./
-1 on the scale filter means the height is determined automatically from the aspect ratio of the input file.

First, I’ll show how to add the effects individually (at a potential loss of quality). Then we will follow up by chaining the filters together.

Let us apply the fade in/outs to the videos.

$ ffmpeg -i ./ -vf fade=t=out:st=45.0:d=2.0 ./
$ ffmpeg -i ./ -vf 'fade=t=in:st=0.0:d=2.0, fade=t=out:st=37.0:d=2.0' ./
$ ffmpeg -i ./ -vf fade=t=in:st=0.0:d=2.0 ./

Unfortunately, as H.264 does not support alpha transparency, we will need to use the filtergraph to let us apply alpha (for the fading) to the stream before outputting to the final video. First, let’s rebuild the above command as a filter graph.

$ ffmpeg -i ./ -i ./ -i ./ -filter_complex '[0:v]fade=t=out:st=45.0:d=2.0[out1];[1:v]fade=in:st=0.0:d=2.0, fade=t=out:st=37.0:d=2.0[out2];[2:v]fade=in:st=0.0:d=2.0[out3]' -map '[out1]' ./ -map '[out2]' ./ -map '[out3]' ./

This uses the filter_complex option to enable a filtergraph. First, we list the inputs. Each input is handled in order and can be accessed by the [n:v] operator, where ‘n’ is the input number (starting from 0) and ‘v’ means access the video stream. As you can tell, the audio was not copied from the input streams in this command. A semicolon separates parallel operations, while a comma separates linear operations (operating upon the same stream).

Next, let’s add the alpha effect and combine the videos into one output.

$ ffmpeg -i ./ -i ./ -i ./ -filter_complex '[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=1, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=1[out3];[out2][out1]overlay[out4];[out3][out4]overlay[out5]' -map [out5]

Next add the text overlay.

$ ffmpeg -i ./ -i ./ -i ./ -filter_complex "[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=1, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=1[out3];[out2][out1]overlay[out4];[out3][out4]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':fontcolor=white:x=(0.08*w):y=(0.8*h)"

Finally, let’s have the text appear at 5 seconds and disappear at 10 seconds.

$ ffmpeg -i ./ -i ./ -i ./ -filter_complex "[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=1, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=1[out3];[out2][out1]overlay[out4];[out3][out4]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':x=(0.08*w):y=(0.8*h):fontcolor_expr=ffffff%{eif\\:clip(255*(between(t\,5\,10))\,0\,255)\\:x\\:2}"

At last, let’s add the audio track and fade it out.

$ ffmpeg -i ./ -i ./ -i ./ -i ./audio.aac -filter_complex "[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=0, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=0[out3];[out1][out2]overlay[out4];[out3][out4]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':x=(0.08*w):y=(0.8*h):fontcolor_expr=ffffff%{eif\\:clip(255*(between(t\,5\,10))\,0\,255)\\:x\\:2}" -shortest -map 3:0 -af afade=t=out:st=68:d=4

The final command, all together. More information about PTS-STARTPTS can be found in the ffmpeg documentation for the setpts filter.


ffmpeg -y -i ./ -i ./ -i ./ -i ./audio.aac -filter_complex "[0:v]fade=t=out:st=10.0:d=2.0:alpha=1,setpts=PTS-STARTPTS[out1];
 [out4][out3]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':x=(0.08*w):y=(0.8*h):fontsize=52:fontcolor_expr=ffffff%{eif\\:clip(255*(between(t\,3\,8))\,0\,255)\\:x\\:2}" -map 3:0 -af afade=t=out:st=52:d=4 -shortest

Cloning Logical Volumes on Linux

I recently damaged my Windows 7 installation while upgrading to Windows 10. The root cause was my dual-boot configuration with Gentoo Linux: because the EFI partition was on a drive separate from the one containing the Windows installation (C:\), the installation failed. The error message was entitled “Something Happened” with the contents “Windows 10 installation has failed.” This took a lot of time to debug, and unfortunately, using Bootrec and the other tools on the Windows 10 installation medium did not resolve the issue.

Here are the steps I followed to back up my Linux data.

Created an LVM (Logical Volume Management) snapshot volume, providing a stable image for the back up. The -L parameter specifies how much space to set aside for filesystem writes that happen during the back up.
# lvcreate -L512M -s -n bk_home /dev/mapper/st-home

Mounted the snapshot volume. As I was using XFS, I needed to specify the nouuid option or the mount would fail with a bad superblock.
# mount /dev/st/bk_home /mnt -onouuid,ro

Used tar to back up the directory and piped the output to GPG to encrypt the contents (as this will be going to an external HDD not covered by my LUKS encrypted volume). Because this back up was only stored temporarily, I opted for symmetric encryption to simplify the process.
# tar -cv /mnt | gpg -c -o /media/HDD/st-home.tar.gpg

The above was repeated for each of my logical volumes.

After the backup completed, I removed the snapshot volumes.
# umount /mnt
# lvremove /dev/st/bk_home

I then created a checksum so the archives could be verified later (with sha1sum -c checksum.txt).

$ sha1sum /media/HDD/*.tar.gpg > checksum.txt

Next, I formatted both of my harddisks and let Windows partition my SSD as appropriate. According to this Microsoft article, Windows by default will create a partition layout as follows.

1. EFI partition [> 100MB]
2. Microsoft reserved partition [16MB]
3. Placeholder for utility partitions
4. Windows partition [> 20GB]
5. Recovery tools partition [300MB]

Because I wanted both Windows and the Linux root filesystem to exist on the same drive, I added a boot partition and a large LVM partition in the placeholder, resulting in the following scheme:

512GB SSD
1. [256MB] EFI
2. [16MB] Microsoft reserved
3. [256MB] /boot
4. [192GB] LVM
5. [8GB] Free space
6. [192GB] Windows partition
7. [300MB] Recovery tools
8. Free space

Recovering my Linux configuration was as simple as booting from the Gentoo live CD, installing GRUB to the EFI partition, and restoring the logical volumes from the back ups.

Google public WiFi

Short post: when you agree to the terms and conditions of Google-sponsored WiFi (e.g. at Starbucks), your DNS resolution settings are updated to point to Google’s DNS servers. While this does provide hands-off protection from malicious websites, it also enables Google to track your browsing habits and gather a large representative sample of the habits of people using that particular WiFi network.

In Linux, look at your /etc/resolv.conf to determine if your DNS server has changed. Google’s servers are 8.8.8.8 and 8.8.4.4.

I recommend checking this file each time you connect to a public WiFi network.
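A quick way to script that check (a sketch: the function parses resolv.conf-style text and flags Google’s public DNS addresses):

```python
# Flag Google public DNS (8.8.8.8 / 8.8.4.4) in resolv.conf-style text.
GOOGLE_DNS = {"8.8.8.8", "8.8.4.4"}

def google_nameservers(resolv_conf_text):
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return [s for s in servers if s in GOOGLE_DNS]

sample = "# Generated by DHCP\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
print(google_nameservers(sample))  # → ['8.8.8.8', '8.8.4.4']
```

On a live system, read /etc/resolv.conf and pass its contents to the same function.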

Arch Linux on Dell M6800

I have not upgraded my computer systems for a while and have been using a combination of Windows 7 and Mac OS X as my main OSes. Recently, I had the opportunity to purchase a Dell M6800. Below is a walkthrough of how I got Arch Linux configured on this monster of a machine.


Pros:
– Blazing fast machine, faster than a top-of-the-line Macbook Pro (mid-2015) Retina model tested at the Apple Store on loading webpages (tested NYT, The Verge, BBC, and Engadget).

Cons:
– Too large to conveniently lug around
– Heavy

Applications Used: Mutt (email), Firefox (web browsing), rxvt-unicode (terminal), dwm (window manager), irssi (irc)


This computer was configured with two hard discs: a 512GB SSD and a 512GB HDD; the first was used as the boot drive. It had been a long time since I configured a hardware (not VM) Linux machine from scratch, so I spent several hours selecting the optimal partition layout to set me up for the next 5 years. I wanted a layout that was easy to reconfigure, hard to configure improperly, and encryptable.

Before even considering the layout, I had to choose which partition table format I would go for. In the past, I would opt for the Master Boot Record (MBR) format which gave me interoperability with Windows. Now, with Windows 8 and beyond requiring UEFI support for Windows 8 certified PCs, there is no real reason to stick with MBR. For this machine, I selected the GUID partition table (GPT).

With that in mind, I considered the following scenarios:

(1) Simple scheme using GPT
In the past, due to the requirement of dual-booting for university, I had opted for an MBR configuration. For this machine I considered the following layout.

Partition 0: EFI boot, 256MB
Partition 1: Windows, 256GB
Partition 2: /boot, 256MB
Partition 3: / (root), 64GB
Partition 4: /home, remainder of space

The benefit of this layout is that it is simple. The drawbacks: it could not easily be modified in the future without copying data to an external disc and back, and the Windows partition would be wasted space if I did not use Windows.

Furthermore, with /home and / separated, I would have to set up encryption twice, doubling the opportunities for mistakes that would render the encryption useless or open up attack opportunities.

(2) Linux Unified Key Setup upon Logical Volume Management (LUKS upon LVM)
This setup would give me all the flexibility of LVM with the added benefit of encryption, meaning I could extend a logical volume across multiple drives within Linux in the event my SSD ran out of space or, say, I wanted to implement a RAID configuration. Unfortunately, again, this would require multiple partitions and key configurations, which would be cumbersome to manage.

(3) LVM upon LUKS
This setup would prohibit me from doing the above in (2), namely, spreading out partitions across physical media. However, it would be the easiest to configure and would give me encryption across my entire drive. Because I had an SSD, I was not too concerned about any r/w performance penalty that I would likely encounter having all these abstractions in place. Here is the partition scheme I opted for, using the GPT.

Partition 0: EFI system, 256MB
Partition 1: Linux boot, 256MB
Partition 2: Linux LVM, remainder of space, encrypted

You will notice that a Windows partition isn’t included here. Because I am using LVM, I can create space for it if I do decide to install Windows upon a separate partition.

I configured the above with the following commands (after booting into the Arch Linux setup disc).

Using GDisk, performed as root

#> gdisk /dev/sda #configure my SSD
(gdisk)> o # create a new GPT
(gdisk)> n # create the EFI partition
(gdisk)> [enter] # default partition number
(gdisk)> [enter] # default sector
(gdisk)> +256M # make the partition size 256MB
(gdisk)> ef00 # make the filesystem type EFI
(gdisk)> n # create the boot partition
(gdisk)> [enter] # default partition number
(gdisk)> [enter] # default sector
(gdisk)> +256M # make the partition size 256MB
(gdisk)> 8300 # make the filesystem type Linux FS
(gdisk)> n # create the LVM partition
(gdisk)> [enter] # default partition number
(gdisk)> [enter] # default sector
(gdisk)> [enter] # up to last sector
(gdisk)> 8e00 # Linux LVM filesystem type
(gdisk)> w # write changes to disc

Using cryptsetup and LVM, performed as root

#> lvmdiskscan # list available disks found by lvm
/dev/sda1 [ 256.00MiB]
/dev/sda2 [ 256.00MiB]
/dev/sda3 [ 465.26GiB]
0 disks
3 partitions
0 LVM physical volumes
#> cryptsetup luksFormat /dev/sda3 # encrypt the partition (LVM upon LUKS)
#> cryptsetup open /dev/sda3 cryptlvm # unlock; exposes /dev/mapper/cryptlvm
#> pvcreate /dev/mapper/cryptlvm
#> vgcreate vg0 /dev/mapper/cryptlvm
#> lvcreate -L 32G vg0 -n root
#> lvcreate -L 8G vg0 -n tmp
#> lvcreate -L 8G vg0 -n swap
#> lvcreate -L 64G vg0 -n home

Create the file systems
In the past, I would have gone for ext4, but I wanted to make sure I was taking full advantage of my SSD (despite the performance penalty from LVM and encryption), so I went with XFS.

#> mkfs.xfs /dev/vg0/root
#> mkfs.xfs /dev/vg0/home
#> mkfs.xfs /dev/vg0/tmp
#> mkswap /dev/vg0/swap # initialise the swap volume

So, now I have achieved my initial scenario. The Arch Linux Installation guide shows how to mount the file system and install packages as appropriate.

Other Callouts on the Dell M6800

– At times, I encountered some rather scary-looking write errors (likely due to the write scheduler being used). I prevented further occurrences by adding libata.force=noncq to my GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerating the configuration.
– I did not remember to install wpa_supplicant to connect to WiFi so I had to procure an Ethernet cable to download it before configuring netcfg to connect to my home network.
– Audio was enabled on Firefox by installing its optional dependencies (use pacman -Qi firefox to list them) and installing pulseaudio and pulseaudio-alsa. Remember to turn off the suspend-on-idle module to prevent pops when playing videos.
– The included Nvidia graphics card is meant for an Optimus configuration, so use the Intel supplied graphics card as the default. You can use bumblebee to offload 3D rendering applications to the Nvidia graphics card.
– Not Dell specific but I needed to remember to update my resolv.conf when connecting to OpenVPN servers. This fixed my DNS resolution issues.


lspci

00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-LM (rev 04)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 04)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d4)
00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d4)
00:1c.3 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 (rev d4)
00:1c.4 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 (rev d4)
00:1c.6 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #7 (rev d4)
00:1c.7 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #8 (rev d4)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation QM87 Express LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 04)
01:00.0 VGA compatible controller: NVIDIA Corporation GK104GLM [Quadro K3100M] (rev a1)
03:00.0 Network controller: Intel Corporation Wireless 7260 (rev bb)
11:00.0 SD Host controller: O2 Micro, Inc. SD/MMC Card Reader Controller (rev 01)

uname -a

Linux london 4.1.5-1-ARCH #1 SMP PREEMPT Tue Aug 11 15:41:14 CEST 2015 x86_64 GNU/Linux

iOS 9 Public Beta 1 Changes

Apple has introduced a number of positive changes with iOS 9. This post details the changes I have found through ad-hoc usage.

– Mail app now shows icons when swiping
– Selecting and deselecting text now has animations
– Passbook has been renamed to Wallet
– Swiping down from the home screen has more searchable features; equivalent to swiping left on the home screen
– Low battery mode
– Double-tap home button shows new switching mode
– Settings app now has search
– Apple Music is now default on US builds
– Maps has transit directions for some markets
– Context menus are now more Palm-esque, as in, they do not extend across the screen
– More sharing options
– Notes supports additional media
– Recommended apps now show up reliably on the lock screen

Find any other changes? Add comments below.

PDF Minimalist 2015 Calendar for Printing

I needed a simple calendar to track some projects and could not find one online that matched my requirements without a watermark. I spent a few minutes creating a minimalist calendar for 2015 (including the months already passed).

Feel free to use it to track your projects and let me know if it was useful.


  • Horizontal entries per day
  • Week and day number on each entry
  • ISO 8601 date and time formatting

Download: 2015 Calendar (letter) (A4)

Setting up a git server accessible via ssh

For small personal projects I often use git to track my work. Sometimes, I’ll work from a different computer and wish I could clone the repository and continue where I left off. I recently set up a git server that allowed me to do this and all I needed was ssh access.

Provided you have a remote server located at gitserver with user admin, you can set one up by doing the following.

Log in to your server and create the git user.
user@local $ ssh admin@gitserver
admin@gitserver $ sudo useradd -m git

Locally, create the ssh key pair that you’ll be using to log in to your server and copy it over.
user@local $ ssh-keygen -t rsa -f ~/.ssh/id_rsa_gitserver
user@local $ ssh-copy-id -i ~/.ssh/id_rsa_gitserver.pub git@gitserver

Create a placeholder for the repository that you want to track. --bare is used here because you’ll be pushing your current repo to the server.
user@local $ ssh -i ~/.ssh/id_rsa_gitserver git@gitserver
git@gitserver $ mkdir ~/repo.git
git@gitserver $ cd !$ && git init --bare

Push your local copy over to the server. Here, I start a new shell with ssh-agent so key management is handled transparently.
user@local $ ssh-agent bash
user@local $ ssh-add ~/.ssh/id_rsa_gitserver
user@local $ cd ~/repo
user@local ~/repo $ git remote add origin ssh://git@gitserver[:port]/home/git/repo.git
user@local ~/repo $ git push origin master

Finally, lock down the git account
user@local $ ssh admin@gitserver
admin@gitserver $ sudo chsh -s /usr/bin/git-shell git

As a general ssh security tip: make sure that password-based login is disabled and public keys are required when logging in to the server.
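In OpenSSH terms, that corresponds to settings like these in the server’s /etc/ssh/sshd_config (a sketch; the option names are standard sshd_config keywords, but check your distribution’s defaults before applying):

```
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```

Reload sshd after editing, and keep an existing session open until you have confirmed that key-based login still works.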

Now you’re all set! You can push changes as usual by using git push.

Google’s addition of C class stock

This is a big announcement and I thoroughly expect it to be approved at the meeting this June.