
Be careful when updating UniFi Controller from 5.3.x to 5.4.x while running WPA Enterprise secured wireless networks. There is a good chance that your network ends up completely unprotected while everything looks fine to the administrator. Both versions are official “stable” releases, so lots of networks are likely affected.

Whoopsie daisies!

The reason for this appears to be the addition of “RADIUS Profiles”, more specifically the migration of existing settings to such a profile. Prior to 5.4, each WLAN could have one RADIUS server assigned for authentication and VLAN assignment. The update to 5.4 appears to be faulty in a way that the old RADIUS information gets lost and no profile gets created. Once the APs fetch that new configuration, for example when restarting, they get a “null” value and fall back to an “Open” security configuration. Yeah right, from “WPA Enterprise” to “Open” just by updating your controller! On top of that, UniFi Controller pretends that the network is still “wpaeap” secured, so if you’re running a remote WLAN site you may not even be aware that anyone can access your network without authentication.

To identify the issue, first check the actual WLAN security settings by scanning the network, and second, look out for the following entry in the UniFi Controller’s server.log file.

[2017-01-18 23:41:30,160] WARN uap - invalid radiusprofile_id: null
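
A quick way to check is grepping the controller’s log. The path below is the usual default for Debian-based installs; adjust it to your setup:

# search the controller log for the faulty RADIUS profile reference
grep "invalid radiusprofile_id" /usr/lib/unifi/logs/server.log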

I contacted Ubiquiti Networks about this and they seem to be aware of the issue. However, instead of accepting it as a massive vulnerability, they claim it to be “just a bug at the update”. UBNT likes to play in the Software Defined Networking (SDN) league, but sensitivity for security issues on the “software” side does not seem to be a priority. Let’s see how quickly this gets handled in a serious way once some company networks unexpectedly get “Opened”…

Starting point

Some years ago I started building a home network around a Synology DS214 NAS/media server, upgraded my WLAN to an Asus RT-AC66U, and added a managed LGS308 switch and a PLC connection to hook up the media rack. Overall that served me well and the level of hardware/software integration felt quite OK.

While I would rate that setup as quite sophisticated for its league, it became evident that my home network was a mess with regards to fault tolerance and actually very consumer’ish in its topology. There are those typical drawbacks that you’re used to living with as a consumer in a rental home without structured wiring. For example, the WLAN access point had to be located close to power outlets and the VDSL modem, which had to be close to the landline socket. In my case that meant the AP’s placement was really lousy in terms of radio signal. Adding better antennas and tweaking the firmware (dd-wrt rocks!) was a workaround but certainly not a solution. Updating some router settings essentially brought down all network communication, since the AP/router/switch services ran on a single box. Meh.

Running all these services (DNS, Web, Mail, RADIUS, File…) on one small box that was originally meant to stream some media was a clear single point of failure. The ecosystem around Synology is really nice for a NAS manufacturer, however it’s based on highly customized, sometimes outdated and restricted versions of the original services. They clearly address the consumer space with their smaller boxes, which means no virtualization, no hardware-accelerated encryption, and no ability to upgrade without dumping the whole system.

The plan

So I took some time and planned a “what if…” scenario of rebuilding my home IT infrastructure, taking the known constraints into consideration. As for networking equipment, I had learned about Ubiquiti UniFi some years ago and was quite interested in its positioning with regards to software-defined networking at a very compelling price point. Now I finally had a chance to start playing with it.

Since I was not just re-doing the network part but essentially my whole home IT, I started to think about options for growing needs in terms of services, bandwidth and media consumption (like 4K). It was clear that upgrading to a more powerful NAS would not cut it, neither from a performance standpoint nor when looking at how painful “real” custom service configuration was. At the same time I like to keep critical data close by. The logical conclusion was to look for some “real” server metal.

That immediately brought up the problem of where to put all that stuff. I do like tech, but at the same time I don’t want my home to look like a RadioShack dump. Long story short, I obviously needed a rack for all these new gadgets. 19” gear takes quite some space but can be managed much more easily than all those different form-factor devices. On top of that, I could simply move the rack in one piece when relocating, which would essentially contain my IT playground.

After some iterations the plan was quite clear to me:

  • Replace the existing consumer hardware with 19” stuff
  • Look for entry-level enterprise gear
  • De-couple wireless network access and actual infrastructure
  • Put all this to a rack

Hardware

Server

After some looking around I decided to go DIY on the server, since most “serious” servers are total overkill and simply not designed to run quietly in a residential home. Home servers, on the other hand, are neither powerful nor redundant nor really upgradable. Having a history of building machines, the shopping list assembled itself fairly quickly:

  • Intel Xeon E3-1260L CPU
  • Asus P10S-I mainboard
  • 32GB DDR4 memory
  • Noctua NH-L9i fan
  • Samsung SM951 M.2 SSD
  • 3x WD RED 4TB HDD
  • Seasonic SS-300M 1U PSU

After completing the build it turned out that the 1260L is a bit oversized for my needs; the 1240L would have done the job just as well. Anyway, some extra maximum core speed won’t hurt.

Rack & Case

I planned to put the rack beneath my desk, an area that always felt like unused space. At the same time that severely limited my options in terms of depth, since I still had to sit there. Luckily I found a vendor that offers both short racks and small DIY server cases:

  • Cablematic RackMatic 9U WK13
  • Cablematic RackMatic 2U CK91

The downside is shipping from Spain, which makes it a bit pricey, but still far below the typical premium for an assembled system. The build quality and utility are not on par with professional racks like Rittal, but for the price you get some really good stuff.

Network

So here I was, looking to build a medium-size home network with about 15 wireless and 10 wired clients, and the “want” to centrally manage all of it. Having looked into the UniFi universe, my Ubiquiti shopping list read like this:

  • Unifi Security Gateway Pro (“USG”)
  • UniFi Switch US-16-150W (“USW”)
  • UniFi AP AC PRO (“UAP”)

Compared to the switch, the USG does not seem to spin down its fans after starting, which makes it terribly loud. This is a minor and fixable downside, but it is disappointing that Ubiquiti did not get it right for two components of the same product range. Therefore I replaced the cheap 40mm fans in the USG with a Noctua NF-A4x10 FLX. Airflow may suffer, but the box runs stable and thermal monitoring shows acceptable values.

By using a bit of creative wiring, the access point could be positioned almost perfectly at the center of the apartment, powered via PoE, while the rack with all the other hardware could be placed somewhere more discreet. What was left to add was a simple 19” VDSL router (which only serves as a modem for the USG) and a VoIP DECT phone base station, which is powered by PoE as well.

  • ZyXEL SBG3500 (VDSL)
  • Panasonic KX-TGP600

Power

Power outages are luckily quite rare in my area and maintenance usually happens at night. However, that would still be an issue for an always-on server with an unbuffered RAID. Therefore I chose a UPS that provides about 15 minutes of autonomy before the server shuts itself down automatically. Having PoE-capable hardware also allows WLAN and DECT connectivity to continue during that time.

  • Eaton Ellipse Eco 650

The total power consumption of the rack during normal operation is 75W. Interestingly enough, the network equipment accounts for more than half of that; I’d expected the server to use much more than 30W.

Putting all this together got me this nice 9U rack setup:
Rack

UniFi AP AC PRO

Software

To put the server to optimal use, I decided to run Proxmox VE as the virtualization environment, with an encrypted Linux MD software RAID 5 configuration plus LVM to store the VM images. Off-site backup was done using SpiderOak at first, but I switched to good ol’ rsync later due to reliability issues with their proprietary software.
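
For reference, the storage stack boils down to a few commands. This is just a minimal sketch with example device and volume names, not my exact setup:

# assemble three disks into a RAID 5 array (device names are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# encrypt the array with LUKS and open it
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 storage
# put LVM on top and carve out a volume for VM images
pvcreate /dev/mapper/storage
vgcreate vg_storage /dev/mapper/storage
lvcreate -L 2T -n vm-images vg_storage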

For management and storage I looked into OpenStack and Ceph first, but got turned off by the infrastructure needs; such a solution is quite nice but obviously oversized for running around 10 static VMs. Speaking of VMs, I separated the services in a way that each machine can go into maintenance without affecting the other services too much (a small provisioning sketch follows the list):

  • Unifi Controller
  • Authentication (LDAP, RADIUS, oAuth2)
  • Web (Proxy, Webserver, Git)
  • Nameserver (PowerDNS)
  • Log/Monitoring (Splunk, Sensu)
  • Mail (Dovecot, Postfix, OX App Suite)
  • Files (Samba, Serviio, netatalk)
  • VPN (OpenVPN)
  • VoIP (3CX PBX)
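
Creating one of those VMs with Proxmox’s qm CLI looks roughly like this; VM ID, storage name and sizes are made-up examples:

# create a small VM for the UniFi Controller
qm create 101 --name unifi --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
# attach a 32GB disk from the LVM storage and boot it
qm set 101 --scsi0 local-lvm:32
qm start 101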

Proxmox VE

Getting into the details of the software part would certainly exceed the scope for now. Be assured that setting all this up took almost a week, but finally I have my manageable, scalable and reliable home network environment :)

Just over a month ago I got very lucky to apparently become one of the first owners of a Samsung SSD 960 Pro M.2. Order queues are delaying this thing for about 3 months, quite the same distribution mess we experienced with the 950 Pro a year ago. Anyway, be assured that the wait is absolutely worth it. Coming from an already not-so-shabby SM951, the 960 really kills it, but see for yourselves…


*/ no comment */

Long time no post. I’ve got a couple of topics on my backlog but simply too few hours to properly put them here, what a pity. Now, one topic I’d like to share is something I’ve been experimenting with for about a year: everyday carry, or EDC - the stuff you drag around almost every minute of your day. There are as many approaches as there are personas and priorities. Certainly mine may not be relevant for everyone else, but I’m quite sure someone will find it useful.

My priority on EDC is utility and simplicity. I dislike the idea of carrying a lot of items of which I might only use a subset. Looking at my day-to-day carry, I came up with a very unsurprising list:

  • Keys
  • Phone
  • Credit card
  • Other cards, cash, USB storage

Keys

I started collecting all kinds of keys I might eventually need - and not surprisingly, I ended up carrying 20 keys all the time. Janitor style. The drawback is obvious: keys are clunky, loud and uncomfortable to sit with. I then split my keys by use case (“work”, “home”, “other”) onto individual keyrings that I could connect using carabiners. That eased the problem, but of course it was just a nice solution to an unnecessary problem. The next iteration was keyholders. I started with the “Carbocage Keycage” and reduced the number of keys to fit the keyholder.

The Keycage came out of a long list of more or less successful (Kickstarter) projects that aim to provide a good solution for carrying keys. I reviewed about 25 of those; there are nice ideas among them, but the rather simple construction of the Keycage matched my requirements. The product is a carbon fiber cage (hence the name) that uses long screws to connect and keep the keys in place. Keys can be organized by re-ordering them to nicely fit together. The Keycage is nicely constructed, lightweight and good looking. However, a severe issue arises when disassembling the keyholder to remove or add keys: carbon and metal do not connect very well. After a few reassemblies the nut loses friction with the carbon and there is basically no way to get your keys out of the holder other than destroying it, which turns out to be quite complicated due to the otherwise good construction.

"Disassembling" Keycage

Next I went with the “KeyBar”, which had initially been out of scope due to limited availability. It’s clearly a heavy-duty product and comes with some nice color, material and finish options. I chose the Titanium model built by EOS. Pricing is steep, but this thing is virtually indestructible and thoughtfully made, obviously at the cost of price and weight. I am delighted with the product though; it does exactly what it’s expected to do and looks nice.

Carbocage (bottom) and EOS Titanium KeyBar (top)

Phone

Considering myself a mobile power user, I chose an iPhone SE for mobility, battery life, speed and accessibility. Even though I have large hands, the 10cm (4”) screen works perfectly for me without taking up too much space. Upgrading from an iPhone 5, I still like the design and dimensions, and as usual the build quality has no equal.

With regards to a case/bumper, I literally checked out hundreds of cases and came up with an Xcase case that allows access to one card, which happens to be a credit card, my primary payment option. Cash fits there as an alternative. It does its job terrifically well without adding clutter or thickness, or costing a fortune. It certainly does not have a spectacular look, which I like, but its utility is great. You can easily flip out the card with one hand. Note, however, that the card will wear a bit more than usual, since covering it with plastic creates friction when sliding it out. Update: Just recently I had to re-purchase one and the quality has become really bad; some parts of the case start to break after a few weeks already. Therefore I switched to an Ozaki O!coat+Pocket, which follows the same concept but is much more flexible and less likely to break.

iPhone SE credit card case

Cards & Cash & Storage

While I’m comfortable with carrying just my phone and credit card most of the time, there are good reasons for carrying a couple more cards, for example health insurance, licenses, debit and access cards. I had several wallets over time and got stuck with the Golden Head Colorado 1231-05-8, which sports a total of 21 slots for cards. I use 10-14 of those, which allows some spare space while keeping its profile low. For me it’s the optimal solution, and boy is this thing done well.

To have some “emergency cash” with me all the time, I put a EUR note into a waterproof aluminum “Cash Stash” (TU241) from True Utility. While it might not survive 20 years on a keychain, it certainly is reasonably well done and compact. For mobile storage I chose a 32GB JetFlash 710 “stick” from Transcend, which comes with a nice metal enclosure and provides a lot of speed and capacity at a low price. Those items are connected to my keychain, of course.

Some time ago I decided to cut cords and go wireless on my audio equipment. I listen to music a lot when commuting or working, but also regularly use a headset for computer games. Carrying wired headphones around all the time is a mess. Going wireless, there are basically two options: proprietary RF or Bluetooth. The latter has the advantage of being implemented in almost all kinds of mobile equipment, while proprietary RF is often optimized for range. Bluetooth offers plenty of throughput, and codecs like aptX provide really good stereo sound quality at 352kbit/s when used simplex (e.g. listening to music). This however is not true when also using the microphone of the headset - in that case the connection falls back from A2DP to the mono “hands-free” profile, which sounds like an analog mobile phone 25 years ago. It’s beyond my understanding why there is no “good enough” duplex operation mode for high-end headsets.

Anyway, since this is not my primary use case, I evaluated lots of Bluetooth headsets and chose the Sennheiser Urbanite XL Wireless. The thing with companies like Sennheiser is that they actually know how to build audio equipment properly and are not purely focused on marketing and fashion. They run circles around those fancy Beats headphones when it comes to battery life, durability, utility and sound quality. Using this headset with my phone and workstation computer is bliss. On my MacBook Pro (2013), however, it was just a never-ending pain.

While connecting and operating the headset works perfectly most of the time, in some situations the transmission drops every couple of seconds for a fraction of a second, which drives one crazy when trying to focus or just enjoying music. There are lots of guides and suggestions to fiddle around with Bluetooth parameters on OSX to “solve” this kind of issue. Sadly, none of those helped in my case, probably because those workarounds change the bitrate, while my setup does not seem to have any kind of bandwidth or connection quality issues.

What helped to reduce the number of connection drops was wiping and re-installing OSX. I think parts of the problems were related to changing those Bluetooth parameters. Still, the issue appeared every now and then. To solve it, I finally decided to bypass the built-in Bluetooth module, which is actually integrated into the Wi-Fi module. Changing this module requires getting a $100 proprietary Apple replacement and ripping apart the machine to install it. As an alternative, I got a small €15 ASUS USB-BT400 dongle with a Broadcom chipset, which supports aptX. There are even smaller dongles, but the smallest ones are not built to stay safely in place in the USB port. Others were a bit bigger and blinked like mad.

Obviously there is no easy way to disable the integrated Bluetooth hardware in System Preferences or elsewhere. In order to do so, you need to download Apple’s “Hardware IO Tools” from the developer portal. For that you need an Apple Developer subscription ($99/y) or know someone who has access. Make sure to get the latest version, which usually supports the latest version of OSX. While those tools are an extension to Xcode, there is no need to actually install Xcode.

After downloading, open the disk image and launch “Bluetooth Explorer”. This tool gives access to a lot of Bluetooth functionality on OSX. We just need the “HCI Controller Selector” from the “Tools” menu, or simply press CMD+K. There you see a list of all present Bluetooth controllers; just select the non-Apple one and hit “Activate”. Removing the Bluetooth dongle may however reset the default.

Enable USB dongle as default
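
Whether the dongle is actually the active controller can also be checked from the terminal; this is just a quick sketch, and the exact field names vary by OSX release:

# list Bluetooth controller details; look for the Broadcom dongle
system_profiler SPBluetoothDataType | grep -i -E "manufacturer|vendor|address"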

Verify that the USB Bluetooth dongle works fine. To make the external dongle the default even after reboot, use the following command:

sudo nvram bluetoothHostControllerSwitchBehavior=always

Then reboot. To revert to the original settings, you may reset the machine’s NVRAM or use:

sudo nvram -d bluetoothHostControllerSwitchBehavior

Just in case you have not yet heard about “Let’s Encrypt”, let me explain the context in a few words. Feel free to visit their website for more details. They’re starting their public beta as early as 2015-12-03.

What is it about?

Services like websites offer secure connections to ensure privacy. Protocols like HTTPS are used, which require SSL certificate(s) and a private key to encrypt data in transit. Besides enabling encryption, a certificate makes sure you’re communicating with the service you intend to communicate with, and not with someone who is intercepting your communication. Sending encrypted data to someone who can decrypt it but is not the intended recipient is even worse than communicating in plain text.

Certificates are signed by a certificate authority (CA), which acknowledges that the person using a certificate exists and is trustworthy. This is usually determined by checking the service’s domain via e-mail validation, or the person’s/organisation’s existence by checking paperwork. Software like web browsers uses a predefined directory of trusted CAs to determine whether to trust the judgement of a CA. For example, if Firefox trusts “Cool CA Inc.”, then certificates signed by this CA are also trusted, and if I have a certificate signed by this CA, your browser trusts the connection to my website.

Since it’s quite hard and expensive to be trusted by all browser vendors, only a few CAs are trusted by default. Browser vendors obviously have to make sure that they can trust a CA, since their judgement will affect millions of users. CAs which made it that far compensated their effort by making a fortune out of the technically trivial process of validating persons/websites and creating certificates. That said, they also offer insurance services, but that’s rather irrelevant for most users. Taking money to enable security contradicts the general requirement of secure communication for everyone. Someone running a small website may not be able to invest a lot of money to get an SSL certificate signed by one of those CAs. One alternative would be using self-signed certificates, which are not trusted by browsers and produce severe error messages when opening the website. Another alternative would be not to use encryption at all, which sadly is what many people have chosen.
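
By the way, you can check yourself which CA vouches for a given site; any HTTPS host works as a target:

# fetch the certificate and show who issued it and how long it is valid
$ openssl s_client -connect martin.heiland.io:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates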

Let’s encrypt

This is where “Let’s Encrypt” joins the party. They’re a non-profit organisation backed by organisations like Mozilla, the EFF and some well-established online companies. The organisation offers certificate signing for free, and those certificates are trusted by all major browsers. You can find a lot more information and details at their website.

A speciality of “Let’s Encrypt” is that you can request certificate signing in an automated way and also renew your certificates this way. This is done by running software which takes care of the request and signing process. Renewing is a standard process which usually needs to happen about every 1-3 years. Certificates are bound to a specific domain, and domain ownership can change over time. Therefore, unrestricted certificate validity for a specific domain can become a threat, because the certificate no longer validates the current domain owner. “Let’s Encrypt” wants to make sure this problem is very temporary, so they opted to make their certificates valid for 90 days. Before or after that expiration date you need to renew the certificate and prove that you still own the domain.

Automatically replacing certificates with ones that are valid for another 90 days is a big relief for webmasters, who no longer have to run through a manual process. It’s a common issue that a certificate expires without notice and website visitors get error messages; this also gets solved by automation.

Automated renewal

So how to automate the process? “Let’s Encrypt” offers software to fetch/renew certificates and also to validate that you still own the domain. For that, the software needs to run on the server defined by the DNS entry for the specific domain. The software initiates the validation process (ACME), and a web service at “Let’s Encrypt” communicates with that software. During that process, ports 80/tcp and 443/tcp are used by default, which means your webserver needs to go down for a couple of seconds to allow the software to use those ports. That downtime may be worked around in the future, but for now I don’t care much about it. “Let’s Encrypt” is working on integration with major web servers such as Apache or Nginx to make the process completely transparent. Until then some scripting is required, which I’d like to share.

To use automatic renewal, first make sure your web server (nginx in my case) loads certificates from the letsencrypt folder.

ssl_certificate /etc/letsencrypt/live/martin.heiland.io/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/martin.heiland.io/privkey.pem;

Stop the web server and initiate the renewal process. Note that a few options need to be set to force non-interactive renewal:

$ service nginx stop
$ /path/to/letsencrypt/letsencrypt-auto -d martin.heiland.io --renew-by-default --rsa-key-size 4096 --agree-tos --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth

Note that due to a bug, this command may fail several times with a “The client sent an unacceptable anti-replay nonce” error. In this case, just repeat the command until it succeeds.
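
Instead of retrying by hand, a small loop can work around the nonce bug; a sketch, with the retry count and delay picked arbitrarily:

# retry the renewal up to 10 times to get past the anti-replay nonce bug
for i in $(seq 1 10); do
  /path/to/letsencrypt/letsencrypt-auto -d martin.heiland.io --renew-by-default --rsa-key-size 4096 --agree-tos --agree-dev-preview --server https://acme-v01.api.letsencrypt.org/directory auth && break
  sleep 5
done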

If HPKP is used, you will need to refresh your pins:

# extract the public key from the renewed certificate
$ openssl x509 -noout -in /etc/letsencrypt/live/martin.heiland.io/fullchain.pem -pubkey | openssl asn1parse -noout -inform pem -out public.key
# hash the key and base64-encode it to get the new pin
$ openssl dgst -sha256 -binary public.key | openssl enc -base64 > hash.key
# escape the pin and patch the Public-Key-Pins header in the nginx config
$ PIN1=$(cat hash.key | sed 's_/_\\/_g') && sed -i 's/add_header Public-Key-Pins.*/add_header Public-Key-Pins '\''pin-sha256=\"'"${PIN1}"'\"; pin-sha256=\"sRHdihwgkaib1P1gxX8HFszlD+7\/gTfNvuAybgLPNis=\"; max-age=31622400'\'';/g' /etc/nginx/conf.d/martin.heiland.io.conf
$ rm public.key hash.key

Finally, restart the web server and check if it’s serving the updated certificate:

$ service nginx start

You can obviously put this process into a monthly cronjob, but you might want to supervise it a couple of times to make sure it works flawlessly.
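
Assuming the steps above are wrapped in a script, say /usr/local/sbin/renew-certs.sh (the name is made up), the crontab entry could look like this:

# renew certificates at 04:00 on the first day of every month
0 4 1 * * /usr/local/sbin/renew-certs.sh >> /var/log/renew-certs.log 2>&1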

As soon as Samsung finally made the SM951-NVMe SSD publicly available (they paper-launched it 6 months before), I got the 512GB model (MZVPV512HDGL) via some “back channel” sale to build my new photography and gaming rig. Since it’s an OEM product, the vendor does not sell it to consumers nor offer any drivers or product support. Typically this is not an issue for an SSD, but in this particular case some M.2 PCIe tweaking via EFI settings was required, and the default Windows 10 NVMe driver provided a dismal sequential throughput of 80 MB/s, while the drive can do about 2400 MB/s depending on block size and queue depth. Guess that’s what you call an early adopter’s tradeoff. As a temporary workaround, NVMe drivers for Intel datacenter SSDs could be used to replace Windows’ default driver.

Don't get fooled by its unobtrusive appearance, this thing is a beast.

Today I noticed that Samsung has released a proprietary driver along with the SSD 950 Pro product launch. The 950 Pro is basically the same SSD, just with stylish PCB coloring and “3D V-NAND” technology (Samsung’s lingo for TLC) rather than the MLC the SM951 uses. While TLC is a more recent technology and offers better density (aka cheaper to manufacture huge chips), it’s not as quick when writing data and should be a little less reliable than MLC. The SM951 may be older but could still be the superior SSD technology-wise. It turns out that this driver is also compatible with the SM951 and provides throughput comparable to Intel’s driver.

Now we're talking.

Samsung’s driver can be downloaded here. Note however that Samsung Magician will still not fully support this drive, since it appears to check for its identifier and offers limited capabilities for Samsung OEM and non-Samsung drives.

Holiday time, home network improvement time :)

IPv6

Like most ISPs in Germany, Telefonica/O2 STILL does not provide IPv6 to their residential customers. During the past year I’ve been using the IPv6 tunnel broker offering from Hurricane Electric (HE). An alternative service would have been SixXS, but besides having multiple local PoPs, it lacks some functionality, and I had a very bad experience with their support team. Being called a liar and getting insulted because of a typo or an overcautious fraud detection system is not nice, guys.

HE provides you with a /64 network and an optional /48, in case 18446744073709551616 hosts simply don’t cut it and you rather need 1208925819614629174706176. HE turned out to be super reliable, fast and of great value - well, it’s free. The documentation is a bit scarce and the user interface obviously targets experienced users. However, I made my way through and also migrated my domain’s DNS/rDNS services there. A real killer feature is the included DDNS (dynamic-IP-to-DNS mapping) option, so you can update and assign a dynamic IP to a regular A or AAAA record without CNAME’ing via one of those dyndns domains. Especially mail servers don’t like CNAMEs for sender domains/servers. Thanks HE, you’re awesome!
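
Updating such a DDNS record boils down to a single HTTPS call; the hostname and key below are placeholders, so check HE’s documentation for the details:

# update the record for a dynamic host via HE's dyndns endpoint
curl -4 "https://dyn.dns.he.net/nic/update" -d "hostname=dyn.example.com" -d "password=YOUR_DDNS_KEY"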

Now, having my router configured for DDNS and tunneling IPv6 is one thing, but I wanted to use native IPv6 for all clients within my home network. It turns out that dd-wrt, which powers my Asus RT-AC66U router, has solid support for RA (router advertisement) and DHCPv6. Certain features do not work reliably with specific dd-wrt beta (aka recent) builds, so I had to trial-and-error my way to “v3.0-r27858 giga” to find a “good” build. Configuring IPv6 is straightforward, but some components, especially wide-dhcpv6, are very picky about their syntax and not very verbose when it comes to errors. Therefore I’m sharing the configuration; I hope it will help others and spare them some frustration.

A central thing to understand with IPv6 is that DHCP works a bit differently compared to IPv4. In good ol’ IPv4, DHCP provided clients with information about DNS, gateway/router, subnets and of course an IP address. With IPv6, those tasks are split between RA and DHCP. RA takes care of providing router information to the local network, while DHCP assigns everything except router information. RA can actually also provide prefix information, which makes clients pick a random IPv6 address. Most small networks will work just fine with RA alone, but DHCPv6 is more powerful in terms of assigning ranges or even making reservations for specific hosts. Therefore I chose to go with RA and DHCPv6, to learn some stuff in the process.

To configure DHCP for IPv6, enable “Dhcp6s” at the “Setup” -> “IPv6” tab of dd-wrt. Also enable “Dhcp6s custom” and provide a configuration like this:

option refreshtime 900;
#option domain-name-servers 2001:470:1f0a:d2a::2;
option domain-name "heiland.io";

interface br0 {
    allow rapid-commit;
    address-pool home 3600;
};

pool home {
    range 2001:470:1111:aaaa:acab:c0ff:ee:1 to 2001:470:1111:aaaa:acab:c0ff:ee:ffff;
};

host vip {
    duid 00:01:00:01:1d:9f:e9:8d:20:c9:aa:bb:cc:dd;
    address 2001:470:1111:aaaa:acab:c0ff:ee:1337 infinity;
};

The domain-name-servers option is commented out since I chose to distribute DNS resolver information via radvd. Usually that would be a task handled by the DHCP server, but my efforts so far were not working out. For some reason the local address of the router was propagated as DNS server, even though I’m not running a DNS cache or forwarder there. This could be some dd-wrt quirk.

I’m distributing search-domain and IPv6 client information via DHCP. As an example, I added a specific host that shall get a reserved IPv6 address. Note that you can assign multiple IPv6 ranges to multiple interfaces if needed. Compared to IPv4 DHCP, hosts are specified via their DUID instead of just their MAC address. The MAC address of the network card is still part of the DUID, but it gets prefixed by a timestamp that is generated by software, usually when installing your OS.

Getting your client’s DUID is a bit more complicated than just getting a MAC address. Johannes Ullrich posted a nice article about where to find it on various operating systems. Again, wide-dhcpv6 is very picky about syntax: duid 0:1:0:1:1d:9f:e9:8d:20:c9:aa:bb:cc:dd would not work properly, while duid 00:01:00:01:1d:9f:e9:8d:20:c9:aa:bb:cc:dd does.
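
For reference, two common places to look; the paths below are the usual defaults and may differ on your system:

# Linux with dhclient: the DUID is recorded in the lease file
grep -i duid /var/lib/dhcp/dhclient6.leases
# Windows: the DUID is stored in the registry
reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters /v Dhcpv6DUID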

RA is implemented by the radvd service, which gets enabled on the same page. “Radvd config” allows specifying some more details, like this:

interface br0
{
    AdvManagedFlag on;
    AdvSendAdvert on;
    AdvOtherConfigFlag on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    RDNSS 2001:470:20::2 2001:418:3ff::53
    {
    };
};

This is a very simple configuration; note that it does not contain any prefix delegation, since assigning addresses is done via DHCP. IPv6 DNS resolver configuration is performed via the RDNSS option.

Tada, 20/20 at ipv6-test.com

DNS

Next, I headed to my DNS setup. My colleague Bert recently held a great presentation that outlined how important proper DNS is for a good online experience. Virtually any service nowadays depends on DNS, and websites use dozens of lookups that suffer from bad DNS performance. Google introduced their Public DNS as a cure-all solution, and almost 500M users re-configured their default DNS to point to Google or use it as the default with Android. That service is blazing fast and reliable, no discussion about that. However, Google already knows all your searches - using their DNS also exposes all your other online activity to them, without you even using a Google account.

So I gave namebench a spin and tested several DNS servers close to my IPv4 and IPv6 exit points. The results were quite interesting, especially when it comes to speed: the gap between fast and slow services was about 40%. The default IPv4 DNS of my ISP was already good and I kept it as secondary DNS. I added the quickest one as primary and a backup DNS in a different state as tertiary. My local clients get the IPv4/IPv6 of the router as DNS, which acts as a forwarder. I ended up with the following servers, which were quick, uncensored and reliable (a quick way to compare latency yourself is shown after the lists):

IPv4

  • 193.189.250.100 (Telefonica, Kassel)
  • 213.191.74.18 (Hansenet, Hamburg)
  • 213.73.91.35 (CCC, Berlin)

IPv6

  • 2001:470:20::2 (HE, Fremont)
  • 2001:418:3ff::53 (NTT, Denver)
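
To compare resolver latency without namebench, a few dig queries already give a decent picture:

# query a resolver three times and print the measured latency
for i in 1 2 3; do dig @213.73.91.35 example.com | grep "Query time"; done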

Imagine the following task:

“Use a script to open https://www.example.com/ in Private Browsing mode.”

Well, this does not seem to be an overly sophisticated requirement and should be a rather easy task to implement. For most browsers, that’s absolutely true.

Microsoft Internet Explorer

iexplore.exe -private https://www.example.com/

Google Chrome

chrome -incognito https://www.example.com/

Mozilla Firefox

firefox -private-window https://www.example.com/

Apple Safari

  1. Go to “System Preferences -> Security & Privacy -> Privacy -> Accessibility”
  2. Click “Add”, select “Applications -> Utilities -> Terminal”
  3. Just execute this simple command…
osascript -e 'tell application "Safari" to activate
tell application "Safari"
    close window 1
end tell

tell application "System Events"
    tell process "Safari"
        tell menu bar 1
            tell menu bar item "File"
                tell menu "File"
                    click menu item "New Private Window"
                end tell
            end tell
        end tell
    end tell
end tell

tell application "Safari" to set the URL of the front document to "https://www.example.com/"'

In a nutshell, this AppleScript launches Safari and closes the first browser window, which is non-private, right away. Then it uses OSX accessibility features to click through the menu bar to launch a Private Window. Finally it sets the address of the focused window to the desired URL.

Well… praise Apple for making intuitive software that just works. Magical. </rant>

A while ago, we decided that our living room was occupied by too many remotes. While this is a common issue when building an HDTV/BD/ATV/audio rack, the solution is plain and simple: get a universal remote. I decided on a Logitech Harmony Ultimate; Logitech has quite a track record in building remotes. There are other manufacturers that build such remotes as well, so please feel free to evaluate them. I stuck with Logitech since I already own(ed) some of their stuff and it worked well. Also, their system is widely used and a de-facto standard. I like the idea of a more-or-less simple remote and an “invisible” hub quite a lot.

The good

I’m not going into details or writing a product review; however, the Harmony Ultimate remote and the hub are very well built. The touch screen is okay’ish regarding responsiveness, and all buttons have nice feedback and illumination. The tilt sensor is a very good idea to wake up the remote on movement. Overall I’d say the hardware is a B+, since I dislike the idea of the LCD splitting the buttons, and the hub is perhaps a bit larger than required. I also got a pair of Harmony Precision IR cables to attach directly to the devices rather than placing the IR blasters. These take care of the rack, which is now behind closed doors, while the hub manages the HDTV and speaker setup.

The bad

Logitech opted to go all-online for configuration. Generally that’s a good idea to spare fiddling around with USB cables and software installation. The hub connects via WiFi and fetches the current configuration when told to do so. The same applies to the Android/iOS remote app.

What I really dislike about this choice is that Logitech requires a browser plugin to be installed. Hell, the ’90s are over! Technically it does not seem to be more than a browser-to-USB bridge for the initial setup of the hub and the remote. Apart from that, Logitech opted for a user configuration frontend built with Silverlight… While arguing about Silverlight/Flash/HTML/Java is quite exhausting, I simply state that I don’t like the implementation. It works without major glitches, but feels quite slow and clunky.

Some very basic features are missing from the Harmony Ultimate: a PIN lock for the remote and multi-user support. So in case you don’t want your kids to play around with the Harmony, you have to hide it. A simple 4-digit PIN lock should not be rocket science, eh? Even worse than that, you get exactly one online account at myharmony.com, which is bound to your remote. Meaning that if you don’t want to share the credentials with your husband/wife/kids, nobody else can change the configuration. On top, you cannot change the account’s mail address, and your configuration cannot be exported. I’d really hope that Logitech accepts that there is more than one person within a typical household who wants to configure the remote.

The ugly

Apart from controlling some TV/audio devices, I use a set of remote power outlets (Intertechno IT-1500) to shut down any standby activity of my TV/audio rack. In order to do so, it’s mandatory to have a piece of hardware that’s compatible with both your Harmony and your RF outlets. The Harmony Ultimate does IR and Bluetooth, no RF. In my case, I opted for the LightManager Pro+, which can handle my outlets and is compatible with most RF outlets offered in the EU/Germany. This nice piece of hardware gets configured separately to communicate with your outlets. In the end, it offers 254 slots for RF devices and can assign several commands to each (on/off/toggle/dim) as well as time- or even temperature-based actions.

Integrating the LightManager into your list of devices and activities is quite straightforward, since Logitech already knows the IR codes. But to my surprise, I simply could not do anything afterwards! I could customize my Activity and add a command for the LightManager, but it kept being added to the bottom slot of my Activity’s command list. Of course I’d need it in the #1 slot, since all subsequent devices rely on power supply. Logitech states that additional commands to an existing Activity must be added to the bottom of the list and that there is no way to re-order them. Damn!

After some quite friendly but not very productive calls to Logitech support, I took some time to work around the issue. As it turned out, there is a way!

Turning on outlets before turning on devices

I assume that the LightManager is already configured and your remote outlet takes “L001” as “on” and “L002” as “off”.

First, go to the crappy Silverlight abomination which calls itself “MyHarmony” and log in. Choose your remote and select “Devices”. Now select your LightManager, click “Change Device Settings”, go for “Power settings” and choose “I want to keep this device on… turn it off when I press the Off button”. This will help save a lot of time when switching Activities that rely on the same power outlet. At the next step, tell MyHarmony that you use two different buttons for power on and power off. Finally, MyHarmony lets you assign power-on and power-off actions. There, you assign command “L001” for power on and “L002” for power off. When you now add the LightManager to an Activity, you can put the LightManager “power on” into slot #1. The downside is that you need one LightManager “device” for each outlet you want to power on/off, but the Harmony Ultimate can handle 15, so that should work out for most people.
