[Figures: CAD of the assembled disk-array; photographs of the disk-array from the front & rear]

Memini: a small disk-array


I've got a lot of unwatched video-files, & being tighter than Spandex I'm obviously not going to pay for a streaming-service.


An external SSD:

Initially I thought an external SSD would suffice, so I bought a 1 TB Adata SE800. It

  • was fast.

  • was portable.

  • was impact-resistant to MIL-STD-810G.

  • was water-resistant to IP68.

  • had many positive reviews.

  • had a comforting three-year guarantee.

  • … had only three days to live.

I was careful (mounting read-only where possible, unmounting before removing, & scrubbing the file-system), but the failure wasn't because of file-system corruption, but because the SSD's hardware or firmware failed.

I have the (perhaps cynical) impression that the firmware-complexity required for MLC SSDs demands a rigor of testing which manufacturers are unwilling to embrace until a failure has been observed in the wild. Currently SanDisk Extreme SSDs are spontaneously wiping themselves. Fortunately, the statistics suggest that these incidents are atypical. According to Backblaze, the lifetime AFR for SSDs has now dropped beneath that of HDDs, but a low AFR is no substitute for a backup.

I hadn't backed-up my data, because I considered a second SSD too expensive. Yes, I got my money back … but not the data; don't put your faith in the law of Conservation of Information (recovery may be theoretically possible, but it's incredibly unlikely).

CAVEAT: it's possible that an identical backup-device, with an identical work-load, would have suffered the same fate.

Cloud Storage:
  • The initial cost is likely to be very attractive, but it's a subscription-model (a member of the set, including Government-debt & CPI, which refutes the theory that What goes up must come down) … which snaps me back to Spandex.
    N.B.: one can pay a single lifetime-storage fee, but then it doesn't provide even the ghostly illusion of being cheap … & they don't have to worry about tedious things like service.

  • I'm of the opinion that the day your data migrated to someone else's cloud, is the same day it became their data. Whilst it may be just video-files today, missions tend to creep. So, you'd probably want to start with a Zero-knowledge cloud-storage service.

  • It requires an internet-connection to be available.

A RAID using large HDDs:

One could just use a small number of large HDDs (perhaps in a simple two-disk RAID-1 mirror), but rebuilding such a RAID after a failure raises the question of the HDD's rate of unrecoverable bit-errors (UBEs), & therefore how likely catastrophic failure is. A typical consumer HDD may have a UBE-rate of one bit in ≈10¹⁴ read (enterprise HDDs have lower quoted failure-rates). A 500 GB HDD holds 4×10¹² bits; so if I read the whole HDD 25 times, I'd expect one unrecoverable error. Whilst HDD-capacity has increased ≈100-fold per decade, the UBE-rate hasn't fallen, so if I built the array from 5 TB HDDs, I'd be asking for trouble.
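The arithmetic above can be sanity-checked with a one-liner, assuming a UBE-rate of one bit in 10¹⁴ read (the hypothetical consumer figure quoted above):

```shell
awk 'BEGIN {
  ube   = 1.0e-14                     # UBE-rate: one unrecoverable error per 1e14 bits read
  b500g = 0.5 * 8.0e12                # 500 GB in bits
  b5t   = 5.0 * 8.0e12                # 5 TB in bits
  printf "500 GB: P(error-free full read) = %.2f\n", exp(b500g * log(1 - ube))
  printf "5 TB:   P(error-free full read) = %.2f\n", exp(b5t   * log(1 - ube))
}'
```

This suggests a single full read of a 500 GB HDD completes without error with probability ≈0.96, but only ≈0.67 for a 5 TB one — hence the trouble.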

OpenMediaVault running on a Raspberry Pi:

A Raspberry Pi natively provides no SATA-ports (though one can add a four-port SATA HAT), can't reliably power even one HDD over USB, & AFAIK receives no patch-stream for either the OS-image or the application.


I've looked at commercial NAS solutions to my storage-problems for a long time, & they never seem to hit the nail squarely on the head; rather too close to the thumb for comfort.

  • They're expensive; e.g. even an entry-level Synology DiskStation DS223 two-bay NAS currently costs ≈289 GBP.

  • Consumer NAS have SPOFs at both the power-supply (though one could also stump-up for a UPS) & the hardware RAID-controller (Enterprise NAS may have dual controllers or operate as a cluster).

  • The file-system is often opaque, leading me to suspect that should there be a hardware-failure unrelated to the HDDs, I'd have little choice but to stump-up for another NAS (if it's still made & even though it's demonstrably unreliable), from the same manufacturer (if they still exist), & populate it with the HDDs from the failed NAS. Even if the file-system is known, it may be proprietary or modified, making it difficult to read from GNU/Linux (my OS of choice).

  • I prefer the 2.5" form-factor to the older 3.5" one; it's lower power, quieter, smaller (obviously) & reliable. For equal angular speed, the performance is typically similar between form-factors. Whilst the larger form-factor is available in larger capacities, it transpires that this is undesirable.

    I'd like to use the bag of random 2.5" HDDs that I've scavenged (from old laptops & set-top boxes). Whilst some NAS may accept the smaller form-factor (using an adapter), they still cater for 3.5", & are consequently physically too large for the precious free space in my working environment, & need a power-supply specified for the larger form-factor HDDs. There are a few NAS or disk-arrays dedicated to the smaller form-factor, but they have relatively few bays, & frankly, they're NASty.

  • A NAS would be visible on the LAN, but since the requirement is to merely make some personal files available, I don't really need it.


One could build a dedicated PC using a motherboard with perhaps four SATA-ports, & use an expansion-card to add another four, then install TrueNAS on it to provide a robust solution. My initial investigation suggests that this is not a cheap solution, & since TrueNAS only supports x86-64 it would be power-hungry (electricity is currently, thanks to Putin's war, obscenely expensive in the UK).

My Solution

  • Lacking any requirement for a networked solution, I've chosen the simple solution of merely connecting each of many small HDDs, via a bus-powered SATA-USB bridge, to a 10-port USB-hub.

  • The USB-hub conforms to the USB-3.0 specification, so can be relied on to deliver 900 mA per socket (cf. 500 mA for USB-2.0). This is just enough for a 2.5" HDD.

    The power-supply is a generous 60 W, to account for all HDDs simultaneously demanding the maximum 5.5 W power.

    CAVEAT: the USB-3.0 specification limits a socket to just 4.5 W, so they may spin-up rather slowly.

  • The ports are individually switchable, to permit the HDDs to be started sequentially without re-inserting USB-plugs (the wrong way round … twice).
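As a sanity-check on the power-budget figures above (the 4.5 W per-socket limit, the 5.5 W worst-case demand of a 2.5" HDD, & the 60 W supply):

```shell
awk 'BEGIN {
  n        = 10               # sockets / HDDs
  per_port = 0.9 * 5          # USB-3.0: 900 mA @ 5 V = 4.5 W per socket
  hdd_peak = 5.5              # worst-case demand of one 2.5-inch HDD, W
  psu      = 60               # power-supply rating, W
  printf "per-socket spec limit: %.1f W (total %.0f W)\n", per_port, n * per_port
  printf "worst-case HDD demand: %.0f W of the %.0f W supply\n", n * hdd_peak, psu
}'
```

So the 60 W supply covers the 55 W worst case with margin, whilst the per-socket specification (45 W in total) remains the binding constraint during spin-up.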


There are many cables, so some container is required to make the assembly robust & portable, & for that purpose FreeCAD was used to design a 3-D printable enclosure to support the parts.

This is possible using FFF, since the only unsupported spans are the 40 horizontal holes used to mount the HDDs. These were piloted (as holes of radius 0.25 mm) for printing, then subsequently reamed for M3 bolts using a pin-vice.

The shape has been sculpted to reduce the volume of plastic (though PLA is biodegradable), but this is largely cosmetic because (to reduce cost) it is infilled anyway.

  • The USB-hub is orientated to make its buttons accessible & its lights visible.


    The SATA-ports are parallel to the USB-ports to avoid twisting the cable through 90°.

    The SATA-ports don't directly face the USB-ports, to reduce sensitivity to the cable-length.


    The HDD-enclosure is orientated to make the HDD-securing bolts accessible.

    The HDD-enclosure is left open @ the upper & lower faces, to facilitate HDD-replacement.

    The HDDs are orientated vertically, to promote convection.

    The 2.5" form-factor allows for a variety of drive-thicknesses, but the space required between HDDs, is actually governed by the 11 mm wide SATA-USB bridges.

  • Each cable must span the difference in height between the SATA-connector & the USB-hub, so I've chosen 32 cm long ones.

Vibration & Noise:

2.5" HDDs are very quiet (24 dBA), but I'm planning to use ten of them, which, since they're incoherent sources, would have a combined sound 1 B louder, i.e. 34 dBA. Whilst this is still quiet, it's about more than pollution of the environment; HDDs (which operate within very fine tolerances) both produce & suffer from noise … karma. Each HDD vibrates (not primarily at harmonics of its 120 Hz rotational frequency, but from a combination of transverse flutter of the disk, spindle-motor bearings, clatter of the actuator, & turbulence of the interior gas), which both transmits through the mounts, & resonates the flat aluminium sides of the case, from which it propagates through the air as noise. Adjacent HDDs can then receive this noise through the reverse mechanisms, resulting in reduced performance. This isn't an issue in a laptop where there's typically only one, but it can't be ignored in a disk-array.
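The combined level of N incoherent sources of equal level L follows L_total = L + 10·log₁₀(N); for the ten 24 dBA HDDs above:

```shell
awk 'BEGIN {
  L = 24; N = 10                                  # ten HDDs @ 24 dBA each
  printf "combined level = %.0f dBA\n", L + 10 * log(N) / log(10)
}'
# prints: combined level = 34 dBA
```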

CAVEAT: the sound from an HDD is also a useful diagnostic tool, since it typically changes just before failure.

Merely enclosing the disk-array in sound-insulation would be counter-productive; firstly, the HDDs are enclosed by this insulation, so their performance still suffers even if you don't; secondly, it would typically result in thermal insulation, necessitating a cooling-fan exposed to a source of cool air, through which it would emit noise (though of a different frequency).
N.B. enclosing the HDDs would also frustrate replacement.

Each HDD is individually mounted on four horizontal rubber anti-vibration bobbins, to reduce transmission of vibration between HDDs, through the mounts. The total width of an HDD (from an arbitrary manufacturer) plus a pair of rubber anti-vibration bobbins made to rather vague tolerances, requires a generous tolerance in the interior width of the HDD-enclosure. Where by chance this tolerance isn't required, washers are used to pad any excess space, to avoid stretching opposing pairs of anti-vibration bobbins as each HDD is bolted into place.

The whole disk-array is mounted on vertical rubber anti-vibration bobbins & also rubber feet, to reduce transmission of vibration both from the surface on which it stands, & to the surface (where it may manifest as noise). Whilst one could have merely placed the structure on an anti-vibration pad (or a much cheaper rectangle of cardboard), use of feet elevates the structure, allowing water-spills to flow beneath the disk-array.

The interior surface of the HDD-enclosure is covered by dimples of 2.3 mm radius & 2 mm depth, somewhat like an anechoic chamber. The optimal height of such dimples to cancel reflected sound would be a quarter of its wavelength, whereas the wavelength emitted from these HDDs is probably an order of magnitude longer … so these are unlikely to be effective.
N.B.: one could alternate these dimples with pyramids, thus doubling the effective height & optimising them for cancellation of 21 kHz; hmmph, still too high.
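The quarter-wavelength claim can be checked numerically (taking the speed of sound in air as ≈343 m/s):

```shell
awk 'BEGIN {
  c = 343                                             # speed of sound in air, m/s
  printf "2 mm dimples: optimal for ~%.0f kHz\n",       c / (4 * 0.002) / 1000
  printf "4 mm dimples+pyramids: optimal for ~%.0f kHz\n", c / (4 * 0.004) / 1000
}'
```

Both optimal frequencies (≈43 kHz & ≈21 kHz) sit at or above the limit of hearing, & well above the dominant frequencies of HDD noise.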

CAVEAT: I currently lack the means to quantify the effect of these measures.

  • I've chosen to use software-RAID because it lacks a SPOF … & it's cheap.

    I've chosen to implement it using Btrfs: a file-system with CoW (copy-on-write), because it:

    • Performs rigorous check-summing.

    • Efficiently uses HDDs spanning a wide range of capacities (though my HDDs are actually all the same size).

    • Facilitates on-line changes to the RAID-level.

    • Supports differences between the RAID-level of data & meta-data.

  • CAVEAT: Btrfs RAID-levels differ significantly from their conventional namesakes.


    Whilst Btrfs RAID-1 (a simple mirror) over two HDDs provides good fault-tolerance, it's no more reliable than Btrfs RAID-10 over larger groups of HDDs (both can only tolerate one failure of an arbitrary HDD, & the probability of such an event increases with the number of HDDs). N.B.: this contrasts with conventional RAID-10 (a RAID-0 stripe over RAID-1 pairs) which permits one HDD-failure per RAID-1 pair. Given that Btrfs RAID-10 is faster than Btrfs RAID-1, I've chosen to use it for data.


    To further increase resilience, the file-system's meta-data is stored as Btrfs RAID-1c3 (too wasteful for data, but the meta-data is smaller), thus tolerating the loss of two of the three copies of the meta-data.

    I've chosen to partition the available space into two independent file-systems each over five HDDs, each permitting the failure of one arbitrary HDD. RAID-10 requires a minimum of four HDDs, so by including a fifth, one has the option of rebalancing the file-system into the remainder, after the failure of one & before sourcing a replacement.
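That recovery option can be sketched as follows; a hypothetical procedure, assuming the file-system is mounted at /mnt/array & that /dev/sdb is a surviving member (amend as appropriate):

```shell
# CAVEAT: a sketch, not a tested procedure.
sudo mount -o degraded /dev/sdb /mnt/array       # mount despite the missing device
sudo btrfs device remove missing /mnt/array      # rebalance onto the four survivors
```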

    $ sudo mkfs.btrfs --data='raid10' --metadata='raid1c3' /dev/sd[b-f];	# Create a file-system spanning the specified devices (CAVEAT: amend as appropriate).

    Should these choices prove suboptimal, then the file-system can be easily rebalanced.
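For example, assuming a hypothetical mount-point of /mnt/array, the RAID-profiles could later be converted on-line:

```shell
# CAVEAT: a sketch; amend the profiles & mount-point as appropriate.
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1c3 /mnt/array
sudo btrfs balance status /mnt/array             # monitor progress
```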

    CAVEAT: you may want to specify a more rigorous check-sum algorithm than the default crc32c when creating the file-system, because subsequent changes aren't currently possible.


Part             Model                             Speed (theoretical)                        Speed (empirical)
HDD              Hitachi Travelstar Z7K500         7200 RPM; 6 Gb/s (SATA-interface)          ≈100 MB/s sequential read
                 Western Digital Black WD5000LPSX  7200 RPM; 6 Gb/s (SATA-interface)
                 Toshiba MQ01ABF050                5400 RPM; 6 Gb/s (SATA-interface)
SATA-USB bridge  JMicron JMS578-A                  5 Gb/s (USB 3.1 Gen 1 & SATA interfaces)
USB-hub          MZX                               5 Gb/s (USB 3.0 SuperSpeed)

Based on this, the SATA-USB bridge shouldn't throttle the data from the single HDD to which it is connected. The USB-hub can cope with reading simultaneously from six HDDs, but in Btrfs RAID-10 one might expect that no more than two HDDs will simultaneously be reading data. To measure the performance in sequential read, a short GNU Bash-script was used:

$ FILES=$(find . -type f | head -10) && { du -bc $FILES; time cat $FILES >/dev/null; }

From which ≈162 MB/s was measured; i.e. roughly 1.6 times the speed of an isolated HDD.

State              Power (W)
Idle               9
Reading            20
Writing            25
Reading & Writing  23

These results initially make the power-supply look over-specified, but they refer to just a single IO-request rather than multiple simultaneous requests, & they reflect the chosen RAID-level (RAID-6 would cause more intense disk-activity).


Even while scrubbing the file-system, the temperature remained well within tolerance.

N.B.: the rate of heat-generation is limited by power-consumption, & some of that budget generates noise.

Component              Type                                      #   Size                      GBP
Enclosure              3D-printed using FFF                      1   206 mm × 186 mm × 102 mm
                       (Original Prusa i3 Mk3S)
Anti-vibration bobbin  Rubber; female-female                         7.5 mm × 19 mm
Anti-vibration bobbin  Rubber; conical                               (4.5 to 6) mm × 9 mm
Bolt                   Hex-socket                                    M3 × 8 mm                 N/A²
Nut                                                                  M3
HDD                    2.5"; 500 GB                                                            120.00¹ ²
Bridge                 SATA to USB-3.1 Gen 1 (Standard, Type-A)                                39.67
Bolt                   Hexalobular                               40  M3 × 6 mm                 1.99
Anti-vibration bobbin  Rubber; male-female                           3 mm × 6.75 mm
Washer                                                               M3                        N/A²
USB-hub                USB 3.0; 10-port; 12 V; 60 W              1
Bolt                   Hex-socket                                4   M3 × 30 mm                N/A²
Washer                                                               M3
Total                                                                                          241.87³


  1. The current () price for unused HDDs on EBay.

  2. Much of the bill is accounted for by parts which I already owned, & for which I had otherwise little use.

  3. For comparison, a Synology DiskStation DS1522+ five-bay NAS (which can run Btrfs RAID-10), currently costs ≈715 GBP (without the HDDs).
    One would either need two of these, or accept the increased risk of using 1 TB HDDs.


The use of SATA-USB bridges frustrates diagnosis via smartctl (which sees the bridge rather than the HDD).

The solution for my JMicron JMS578-A bridges was to modify the basic command to

$ sudo smartctl --device=sat --all /dev/sd[b-k] # /dev/sda contains my OS.
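That command generalises to a quick health-sweep over the whole array (device-names as above; a hypothetical sketch):

```shell
# Summarise the overall health-status of each HDD behind its SATA-USB bridge.
for dev in /dev/sd[b-k]; do
  echo "=== ${dev} ==="
  sudo smartctl --device=sat --health "${dev}"
done
```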

The design can only accommodate a USB-hub of specific dimensions; flexibility would be better.

No practicable solution currently presents itself.

The mounting-holes in the 2.5" form-factor HDD are slightly further from the SATA-connector end (my CAD-model erroneously assumes symmetry).

Lower the HDD mounting-holes in the CAD.

Each HDD is secured by four bolts, which makes replacement tedious & frustrates inspection of the label on the side.

No practicable solution currently presents itself.

Noise propagated through the air from the flat aluminium sides of an HDD's case, can be picked-up by the case of adjacent HDDs.

If the HDDs are to remain parallel then interleaved sound-insulation may be required, which would significantly increase the width of the disk-array; I'm not currently convinced of the requirement.

The four vertical anti-vibration bobbins were merely scavenged, rather than carefully selected for the use-case.

Select anti-vibration bobbins appropriate for the ≈625 g weight of the fully populated disk-array.