Fixing WiFi roaming in macOS

I recently extended my WiFi network coverage with a TP-Link EAP225 v3 access point in the garage, connected via an ethernet cable to the router in my office. The garage and my office are at opposite corners of the house, so this combination should provide excellent coverage overall.

This blog post is a reminder to myself about the problems that can prevent devices from roaming between the two WiFi access points based on whichever has a stronger signal.

macOS in particular had a hard time switching to the stronger signal, and the cause turned out to be a slight discrepancy between the security protocols supported by the two access points. Even with the same SSID and password, if the security protocols are not identical, the implicit roaming support in macOS does not function properly.

The most helpful tool for diagnosing this issue was the command-line airport utility, which is located in /System/Library/PrivateFrameworks/Apple80211.framework/Resources.

Running “airport -s” from this location will dump out all the SSIDs found, as well as their security settings. Any discrepancies need to be fixed…in my case the EAP225 was configured to support an extra protocol (TKIP) in addition to AES. TKIP is less secure anyway, so disabling it not only solved the roaming problem but probably improved security as well.
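To spot the discrepancy, I compare the SECURITY column that each AP advertises for the same SSID. A rough sketch of the idea, run against a saved scan (the sample output below is made up, and the real column layout of “airport -s” varies by macOS version):

```shell
# Hypothetical saved output of "airport -s" (placeholder SSID/BSSIDs);
# two access points advertise "HomeNet" with different security strings.
cat > /tmp/scan.txt <<'EOF'
SSID BSSID RSSI CHANNEL HT CC SECURITY
HomeNet 11:22:33:44:55:66 -45 6 Y US WPA2(PSK/AES/AES)
HomeNet aa:bb:cc:dd:ee:ff -70 36 Y US WPA2(PSK/AES,TKIP/TKIP)
EOF
# Print the distinct security strings seen for the SSID; more than one
# line of output means the access points disagree.
awk '$1 == "HomeNet" { print $NF }' /tmp/scan.txt | sort -u
```

If more than one security string shows up for the same SSID, the APs disagree and roaming is likely to misbehave.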


Revolution in low-power NAS

Even though I’m a Mac user at home, I hadn’t been paying attention to a revolutionary step forward in lowering power consumption.

I’m speaking of the “Wake on Demand” feature in Snow Leopard and AirPort routers.  This is a combination of router and OS trickery with great potential for letting machines go to sleep (i.e. 1–3 watts of power usage), while the router acts as a proxy for the sleeping machines.  If another machine attempts to access a sleeping server, the router sends the magic WakeOnLan packet (or WakeOnWifi equivalent) to the server so that it can wake up and handle the request.
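The format of that magic packet is simple enough to build by hand: 6 bytes of 0xFF followed by the target’s MAC address repeated 16 times. A minimal sketch in shell (the MAC and broadcast addresses are placeholders, not from my network):

```shell
mac="00:11:22:33:44:55"            # placeholder MAC of the sleeping server
payload="ffffffffffff"             # the packet starts with 6 bytes of 0xFF...
for i in $(seq 16); do             # ...followed by the MAC repeated 16 times
    payload="$payload$(echo "$mac" | tr -d ':')"
done
echo "${#payload}"                 # 204 hex digits = 102 bytes, the standard size
# To actually send it as a UDP datagram to port 9 (using bash's /dev/udp):
#   printf "$(echo "$payload" | sed 's/../\\x&/g')" > /dev/udp/192.168.1.255/9
```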

I haven’t figured out how exactly it works, but most attempts to connect to a sleeping Mac via either SSH connections, VNC, SMB, or AFP will trigger the wakeup.  The underlying infrastructure is Bonjour/ZeroConf based, and I think the new router “sleep proxy” could easily be integrated into OpenWRT or other firmware, but more interesting is how Apple might have had to tweak their OS to get Bonjour involved in things like opening an SSH connection to another machine.  For example, I would have thought vanilla OpenSSH would use raw sockets instead of a Bonjour API, so I wonder whether Apple has had to modify all of its common client software packages to play well with this infrastructure?

If this can be spread to other routers and NAS operating systems, it would bypass the need to optimize idle power consumption with exotic Atom-based motherboards, etc.  A low-cost Core i3 with a mainstream H55 motherboard already beats or comes very close to an Atom board in terms of idle consumption, but actually letting your server go to sleep is an order of magnitude improvement over either.

More info here:

NAS OpenSolaris ZFS

Decisions, decisions…

So in my idle moments, I wonder whether it is time to upgrade my stable but not quite satisfying OpenSolaris NAS box from its current Nevada Build 76.

For some reason, I just downloaded SXCE Build 91 even though I had only ever contemplated moving over to OpenSolaris 2008.05.  The only real benefit of SXCE 91 is that I know I can continue to install and run it in headless mode (i.e. over a serial console), since I’ve already been through that with Build 76, plus maybe a few bug fixes since 2008.05 was rolled out.  Otherwise it seems a dead end, and the upgrade will be painful since I’m not familiar with the setup for Live Upgrade.

The Linux-isms in OpenSolaris, as well as a ZFS root file system (since I’m still nowhere near booting from flash yet), would be a real boon.  I’m just not sure whether I’ll have to install a graphics card in the box just to do the install, then disable X and GDM and/or tweak the grub menu as I did earlier:

$ diff menu.lst menu.lst.orig
< timeout 5
---
> timeout 10
< serial --unit=0 --speed=9600
< terminal serial
---
> # serial --unit=0 --speed=9600
> # terminal serial
< # splashimage /boot/grub/splash.xpm.gz
---
> splashimage /boot/grub/splash.xpm.gz
< title Serial Solaris Express Community Edition snv_76 X86
< kernel$ /platform/i86pc/kernel/$ISADIR/unix -B console=ttya
---
> title Solaris Express Community Edition snv_76 X86
> kernel$ /platform/i86pc/kernel/$ISADIR/unix


Either way, I’m looking forward to the integrated CIFS server, since my Samba config has never been satisfactory and I’m down to pure SSH / SCP / SFTP access at the moment, which is a pain at times, although WinSCP and Cyberduck make access from Windows and the Mac fairly nice.



NAS OpenSolaris ZFS

Great NAS with ZFS resource

I just ran across Simon’s blog, which recently covered many of the ZFS topics for a home NAS that I had been interested in.

Definitely check it out:

NAS OpenSolaris ZFS

Indiana Preview 2: Installing to a 4GB CF card

Tim Foster of Sun Ireland wrote up this concise summary of installing the latest OpenSolaris Indiana Preview 2 on a CompactFlash card:

Hi Brian,

On Mon, 2008-02-18 at 13:44 +0000, Brian Nitz wrote:

> I wonder if you can post or blog the steps you took to accomplish this 

> and what exactly you accomplished (I assume it isn’t specific to the eePC?)

Sure, it was pretty straightforward actually, provided your machine is able to boot from the target device [ these steps won’t provide BIOS support where there was none before! ]

Here’s what you need in order to boot from an alternate pool, “tank” in the steps below, all done as the root user.

1. Install Indiana on a big disk, giving you a pool called “rpool” – I used an external USB hard disk.

2. Reboot the system as normal, from rpool

3. Take a recursive snapshot of this:

    # zfs snapshot -r rpool@snap

4. Create a pool on the device you want to boot from

    # zpool create tank c8t0d0s0

Obviously your device will differ.

Note that this needs to be an SMI-labeled disk: run “format -e” and use fdisk to determine that you definitely have a Solaris partition, and use ‘l’ within format to label the disk.

If you want to swap to that disk, remember to assign a slice for swap [ given that I only had a 4gb SD card, but 2gb of ram, I didn’t really need swap, and couldn’t afford to dedicate space for swap/dump on the target disk]

5. Send the snapshot to the target pool

    # zfs send -R rpool@snap | zfs recv -Fd tank

You’ll probably get warnings about not being able to mount /export/home or /opt – given that these are already mounted from rpool.

If you want the target pool to have compressed filesystems, you need to set compression on the source filesystems *before* issuing the zfs send command (ZFS filesystem properties get copied over along with the send stream), e.g. “zfs set compression=on rpool/ROOT/preview2”. Obviously that only affects new blocks written to rpool, but it will cause all new blocks on tank to be compressed when you send the snapshot streams over to the other pool.

6. Fix up etc/vfstab entries, set the bootfs property on tank and install grub

    # mkdir /tmp/a

    # mount -F zfs tank/ROOT/preview2 /tmp/a

    # vi /tmp/a/etc/vfstab    (edit swap and the entry for “/”)

    # zpool set bootfs=tank/ROOT/preview2 tank

    # installgrub /tmp/a/boot/grub/stage1 /tmp/a/boot/grub/stage2 /dev/rdsk/c8t0d0s0

    # bootadm update-archive -R /tmp/a

    # reboot

7. Set your BIOS to boot from your new target device, or remember to jump into the boot-device selection dialog from your BIOS screen.

I think that’s all I did – anyone else, feel free to point out any steps I’ve missed!





NAS OpenSolaris ZFS

Trouble with Samba

My first server was based on Nevada 56, which didn’t include Samba. I ended up compiling v3.0.22 by hand and setting up an SMF configuration to start it at boot, and I was relatively satisfied. I am by no means a smb.conf expert, but it worked well enough for my simple needs.

The problem was that performance was spotty and never sustained a high transfer rate. I wasn’t sure whether the system’s overall ZFS performance was to blame, or something in Samba.

Well, after updating to a faster CPU and motherboard, and also moving to the Samba 3.0.25 shipping as part of Nevada 76, I have run into the same performance problems. It can perform well for short intervals, but it is extremely bursty and often stalls altogether. Running “zpool iostat 5” would often show 0 write attempts in a 5 second period.

The main clients of the NAS server in my house are Macs running OS X, and switching to SCP to back up large datasets resulted in a 10x performance increase, even including the SSH overhead.

At this point, I may investigate NFS and automounter support in Mac OS X, but I also have a Windows machine, and perhaps future clients, that may only speak SMB/CIFS.

The new built-in CIFS support starting in Nevada 77 will probably mature into a really nice solution, but in the short term I am not inclined to do another upgrade, and I also see no good reason why Samba shouldn’t perform reasonably well.

Any hints out there?

NAS OpenSolaris ZFS

Hardware History

This is a brief history of my hardware selection for my OpenSolaris NAS server. The case, power-supply and choice of disks have remained constant, but I have gone through 3 motherboards in total since April 2006, and I will list my regrets for each.

The first motherboard I tried to use was the MSI 915GM Speedster, which is not one of their consumer boards but instead part of their server/workstation lineup. It is a microATX board that uses the mobile 915GM chipset, and therefore supports mobile CPUs like the Pentium-M and Celeron-M lines. I scrounged a few of these off eBay for cheap. Aside from its price, this was a reasonable motherboard at the time, but early versions of Solaris and OpenSolaris didn’t seem to support its dual Marvell gigabit ethernet chips at all.

Now, nearly 18 months later, support for these ethernet chips is probably already baked in, but I still cannot recommend this board as MSI has stopped supporting it. My first board was also DOA, and there is a rather serious error in the documentation of critical jumper settings, which may have contributed to its failure.

While I was waiting for the 915GM to be replaced, I acquired an MSI 945GT Speedster, which is somewhat obviously based on the more modern 945GM chipset. This mATX board accepted Core Solo and Core Duo CPUs. It is still sold, but I have not been able to determine whether they’ve added support for Core 2 Duo chips. In conjunction with a Core Solo T1300 from Craigslist, I ran Solaris Nevada build 56 on it until about 2 weeks ago. The main problem with this system was that it was very picky about the brand of DDR2 memory it would support; it took me a few tries to find some that worked. This machine would idle at 60 watts when the RAID drives were all spun down, which isn’t great but was relatively low.

Recently, I decided to see if the latest G0 stepping of the standard Core 2 Duo chips (E2160) could be brought down to similar power levels using a more standard board, since in the C3 state they draw very little power. I picked the Gigabyte GA-P35-DS3R, mainly because it has 8 SATA ports instead of the 4 I had previously. The other angle I was playing with was running without a video card. Fortunately, this model has a serial port and boots happily without a video card in the PCIe slot, and OpenSolaris can be made to run fairly easily over a serial console. Prior to installing this board, I was able to get idle power draw down to 46 watts using Windows and Ubuntu 7.10. Nevada 76 was more like 53 watts, but given that it doesn’t yet have the tickless kernel and other Intel power-saving tweaks of the latest 2.6 Linux kernels, I was pretty happy.

But when I mounted it in place of the 945GT, I found that the power consumption was about 10 watts higher. This was apparently due to differences between the two power supplies I used, even though the first set of numbers came from a 380w Seasonic rather than the 330w Seasonic that is in the NAS case. Normally a higher-rated supply would not reach its 80% efficiency range until the system drew more power, but in this case it was more efficient. This is still a mystery, and may just be down to differences in manufacturing batches. And unfortunately, the 380w supply could not successfully boot the system once all the RAID disks were attached, which is also somewhat mysterious. So I stayed with the 330w model and the slightly higher power consumption.

The bottom line is that today’s desktop CPUs render the old mobile-chip-on-a-desktop approach somewhat useless. In particular, AMD’s BE line of 45w CPUs might be even better than the Intel line. A lot of the variables lie in the motherboard chipset and power-supply as well, which is probably why my old Terastation can draw 40 watts maximum with 4 drives spun up … its PowerPC system is probably taking 10 watts or less.

Let me describe some of the other aspects of the system:

I’m using ZFS’s RAID-Z setup across 4 Western Digital 400GB RE2 drives. These are supposedly more industrial strength than the consumer drives. So far I have no complaints, although they do run a bit hot.

BTW, I was operating under the mistaken assumption that with 8 SATA ports, I could add 4 more drives to this array to grow its capacity, but I recently learned that you cannot add drives to an existing RAID-Z array; you can only add more drives to the same pool (perhaps 2 in a mirrored setup, or 4 in another RAID-Z setup). If I want to increase the capacity of the existing array, I have to replace the drives one at a time with larger ones, waiting each time for ZFS to rebuild the contents onto the new disk. Repeat 3 more times and you’re done.
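That replace-one-disk-at-a-time procedure looks roughly like this (a sketch only; the pool name “tank” and the device names are placeholders, and I haven’t actually run this on my array yet):

```shell
# Swap one member of the RAID-Z vdev for a larger disk; ZFS resilvers the
# new disk from the data and parity on the surviving members.
zpool replace tank c2t1d0 c2t5d0
zpool status tank    # wait here until the resilver completes
# Repeat for each remaining disk; once the last one is replaced, the extra
# capacity becomes available (on some builds only after an export/import).
```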

There is a fifth disk in this system, an IDE drive that contains the OpenSolaris operating system. My ultimate goal is to boot the OS from a USB memory stick, just to end up with fewer moving parts and hopefully less power draw. There has been quite a bit of progress in this area by various folks, particularly to support having a Live CD, but I haven’t invested the time to learn how to trim down the set of packages. My model for this would be FreeNAS, a FreeBSD distribution for NAS appliances designed to run off a USB memory stick, with the trick being to not wear out the NAND flash by loading everything into a RAM disk and only mounting the USB stick read-write to update critical configuration files (if you change settings via the Web admin console).

The final component in the system was the Antec P180 case, and the aforementioned Seasonic 330w power supply. Both were picked because of their low-noise characteristics, and I have to say the only time I hear the server is when the disks first spin up when I access the RAID array.

NAS OpenSolaris

Goals for my NAS Project

My basic objective was to create a NAS appliance on par with a Buffalo Terastation, meaning the following goals:

  1. low power (my Terastation runs at 40 watts on average)
  2. low noise (my Terastation’s fan is actually quite noisy)
  3. RAID-5 storage
  4. UPS support over USB (my Terastation doesn’t support USB; it only supports archaic RS-232 communication with the UPS).
  5. NTP support to keep server clock closely synced to clients.

The additional desires were:

  1. Easier expansion of total storage by adding additional drives.
  2. Support for NFS or RSYNC protocols.
  3. Perhaps hot-swap failover of drives.

The next post will discuss how OpenSolaris stacks up to this list of requirements, and my initial hardware choices.

OpenSolaris ZFS

First post

I’ve just set up this blog to record my technical notes as I experiment with OpenSolaris, ZFS, and quiet, low-power home file servers.