LX521 with software DSP

I’ve just finished building a pair of Linkwitz Labs’ LX521 loudspeakers.  They’re everything I could want: 4-way, dipole radiation, with active crossover and eq.  They sound amazing.  Previously the best speakers I had heard were the Pluto 2.1.

What I’ve done differently is to implement the xover/eq with a digital processor, which runs in software on a small pc.  I did this using only open-source, free (as in beer) software, plus a bit of code I wrote myself.

Disclaimer: If you replicate what I’ve done here, you will not have LX521 loudspeakers.  A key component of the LX521 as designed by Siegfried Linkwitz is its Analog Signal Processor (ASP).  I’ve tried to realize the ASP transfer functions in software dsp, and I’m satisfied with the result, but this is a significant departure from Linkwitz’s design.  So we can’t call it LX521.

Summary

  • magnitude response computer-optimized to track the ASP within 0.2dB
  • phase response similar to the ASP; somewhat less total variation in group delay
  • negligible distortion
  • dynamic range and signal-to-noise ratio optimized, with gain structure determined by a statistical analysis of actual program material
  • runs on a small pc that can double as a media server
  • dsp filters keep 53-bit precision through the processing chain; output to a multi-channel sound card is re-quantized to 16- or 24-bit with noise-shaped dither

Why Digital?

There’s really no compelling reason to implement the ASP functions in dsp.  The ASP works, so why bother?

  • Time and money.  The ASP runs about $700 in parts.  Assembly and testing is perhaps 30 hours and error-prone.  My software dsp runs on a small pc I already have; eventually I’ll put it on a dedicated fanless pc that can be had new for $200.  Instead of building circuits, I’ve invested 30 hours in learning a lot about dsp.
  • Flexibility.  Modifying the ASP is a delicate, time-consuming operation with a hot soldering iron; software changes are fast and clean.  Also, building a second ASP means another 30 hours’ assembly time; duplicating my software setup is a matter of copying the bits to another machine.
  • Integration.  My music all lives on a computer: I haven’t played a CD in 8 years.  Why not use the media server to perform the crossover and eq duties as well?
  • Ease.  I’m just not that great at soldering and testing electronic equipment.  Software I can do.  Scientific computing is my thing.

I don’t expect to make something better than the analog processor, just something that isn’t any worse.  I’m mostly interested in how I can use this for designing other stuff.

There is some well-founded scepticism about using dsp for active crossover and eq.  To convince myself that what I’ve done is on par with Linkwitz’s ASP, I’ve made some careful measurements.  Read on for the gory details.  I’m pretty happy with the results, but I might have missed something.  If you spot something amiss, please let me know.

How It Works

I built my software dsp on top of ecasound, which runs on a linux pc.  It’s installed on a USB flash drive, so one can easily run it on any pc that has a multi-channel soundcard — no linux knowledge required.

I’ve set up ecasound to decode a digital audio file (mostly flac in my case), apply a cascade of digital (IIR) filters, and assign outputs to appropriate channels of a multi-channel sound card.  The filters themselves are standard digital biquads, implemented as ladspa plugins written in C.  The filters are of the same type and arrangement as in the ASP block diagram found in the LX521 documentation.

To give a simplified example to illustrate how this is done, here’s an ecasound command that implements a 4th-order Linkwitz-Riley 2-way crossover at 1kHz, with a -6dB notch (Q=0.6) at 300Hz added to the woofer, and a 4dB high shelf (Q=0.71) at 12000Hz on the tweeter:

ecasound -z:mixmode,sum \
         -a:all -i:mysong.flac -f:16,4,44100 -o:alsa,surround40:Device \
         -a:woofer -efl:1000 -efl:1000 -eli:9021,-6,300,0.6 -chorder:1,2,0,0 \
         -a:tweeter -efh:1000 -efh:1000 -eli:9025,4,12000,0.71 -chorder:0,0,1,2

The output here is raw 4-channel PCM (woofer L/R are on channels 1 and 2; tweeter L/R appear on channels 3 and 4) which gets sent directly to the soundcard named “Device”.  The “-eli:9021,-6,300,0.6” runs my ladspa plugin that implements a biquad notch filter with the given parameters; the “-eli:9025,4,12000,0.71” correspondingly does the high shelf.

I started out building my filters using the many freely available ladspa plugins, but I couldn’t find any shelving filters and ended up writing my own.  In the end I coded four custom ladspa plugins (one each for the woofer, woofer-cut, mid and tweeter filter chains) that combine all the filters in one piece of code.  I did this mostly so I could keep 53 bits of precision all the way through.  More on this later.

Here’s a picture of the processing chain.  Each path represents a L/R pair of channels:

dsp block diagram

Ecasound and ladspa use floating point to represent audio samples internally, so digital clipping between filter stages isn’t a concern.  After applying the ladspa plugins and assigning the outputs to the appropriate channels, ecasound outputs a 6-channel raw PCM stream in 32-bit floating point.  I pipe this to sox for requantization (with noise-shaped dither) to 16- or 24-bit, depending on the output hardware.  Sox sends its output directly to the hardware, so there’s no software mixer intervening at the operating system level to ruin the dither.

Many (most?) multimedia PCs already have a 6- or 8-channel soundcard; ordinarily it would be used for surround sound, but it works just fine for this application.  I use an external 8-channel USB audio interface ($30).  (Using multiple soundcards isn’t an option as their clocks won’t be sync’d.  Clock drift will wreck the synchronization between channels.  With a single multi-channel interface, all the DACs run off the same clock.)

Frequency Response

The main function of the ASP is to shape the frequency response at the analog outputs.  Getting this right is critical.  Digital biquads have slightly different frequency response shapes than their analog counterparts, so that using the filter parameters from the ASP block diagram will get you close, but with discrepancies of a couple dB in places.  So, using nonlinear least squares and the Nelder-Mead optimization code built into R, I optimized the biquad coefficients in the eq filters to get the closest possible fit to the measured ASP response.  Here are my (measured) dsp responses overlaid with Linkwitz’s measurements from the LX521 owner support page.  The dsp and ASP curves are pretty much indistinguishable:

(I’ve left out the numerical axis labels on purpose: these curves have been carefully engineered by Mr. Linkwitz, and I expect he considers them a proprietary aspect of the LX521 design.  He hasn’t made these data public, so I’m not about to.)

The residual plots (showing the difference between dsp and ASP outputs) are more meaningful:

The DSP output tracks the ASP within about 0.2dB in each pass-band.  Errors on the order of 1-2dB occur only out-of-band, where the response is already down 24dB or more, so they are insignificant: a 2dB error at -24dB amounts to an error of only about 0.1dB in the summed response.

By the way, all the measurements here are taken at the digital output, at the point where the PCM stream goes to the soundcard hardware.  Ordinarily one would measure the analog output, but I didn’t want the soundcard’s contribution to cloud the dsp results.  If the soundcard adds too much noise or distortion, you can always get a better one.

Phase Response

Here are my group delay measurements overlaid with those for the ASP:

It’s hard to make direct comparisons here because Linkwitz’s data contain an unknown additional delay due to computer latency, which I haven’t figured out how to remove.  So I’ve just shifted the curves vertically to give the closest alignment.  Consequently the numerical values should only be interpreted in a relative sense.  In any case, it’s clear that the dsp and ASP introduce about the same phase distortion.

The ASP uses all-pass filters to phase-align the driver outputs in their crossover regions.  Instead, I’ve used pure delays (determined by acoustic measurements: indoor, gated; I really need to go outside and do this more carefully once winter lets up).  The all-pass approach essentially delays only frequencies below a given cutoff, whereas the digital delay applies to all frequencies equally.  I think this explains the discrepancies in the graphs above: my dsp introduces more delay at high frequencies than the ASP does.  Consequently the dsp has less overall group delay variation than the ASP, although I doubt this improvement is audible.

Distortion

Digital IIR filters introduce distortion through quantization (round-off) errors and limit-cycle oscillations.  This is a bigger problem for low-level signals, and especially for filters with low cutoff frequency.  There are ways to mitigate this; the most reliable is probably to work in higher precision, since these artifacts typically affect just a few of the least significant bits.

In ecasound and ladspa the default internal sample format is single-precision floating point (hence 24-bit precision, in the IEEE standard).  That might actually be enough, but not wanting to take any risks I coded my ladspa plugins to use double-precision (that’s 53 bits of resolution) for the sample data and filter coefficients.

When the ladspa plugins give the filtered signal back to ecasound, it gets truncated to single-precision.  If the sound card can’t take 24-bit samples (mine can’t) then we need to further requantize to 16-bit.  This will introduce quantization noise unless we add dither.  Unfortunately ecasound doesn’t dither, so I pipe the 24-bit float data through sox for re-quantization with noise-shaped dither en route to the sound card.  Sox implements several good dither algorithms.

I used a modulated sine wave as a test signal to measure distortion.  Linkwitz discusses the utility of such signals here.  (It took me a long while to realize the test signal needs to be generated with dither, or else it will contain quantization distortion products that will show up at the output no matter how well you design your filters.  May this save someone some time!)

Here is what comes out if you feed my woofer dsp with a low level (-78dBFS peak) modulated sine with 50Hz carrier frequency.  First without dither:

distortion (no dither)

The quantization distortion is clearly visible, mostly as odd-order harmonics.  The peaks are below the least-significant bit for 16-bit audio (the horizontal line here references -96dBFS) but these things can be audible if you turn up the volume on a reverb tail.  Now with some noise-shaped dither after the filters:

distortion (dithered)

This is much better: no quantization distortion, but at the cost of raising the noise floor.  The noise at high frequencies might look horrifying, but in fact it’s been carefully engineered (via noise-shaping) to concentrate the noise at frequencies where it’s least audible [1].

I ran this distortion test systematically over a range of frequencies and on all four dsp channels.  I couldn’t find anything worse than the results shown here.

[1] Lipshitz, Stanley P.; Vanderkooy, John; Wannamaker, Robert A.  “Minimally Audible Noise Shaping”, J. Audio Eng. Soc. 39, pp. 836-852, 1991.

Dynamic Range and Signal-to-Noise

Dipole speakers need equalization of +6dB/octave toward low frequencies, to compensate for destructive interference of the front and back waves.  Without the sub-cut switch engaged, the LX521 ASP adds a lot of gain at subsonic frequencies.  In a dsp implementation this gain can cause digital clipping, depending on the spectral content of the source material.

To prevent clipping, we need to reduce the overall gain, and this in turn compromises the usable dynamic range and signal-to-noise ratio.  The options are:

  1. Prevent clipping and sacrifice dynamic range.
  2. Allow some clipping.  It’s only on the woofer; it won’t affect the other channels and we might not notice.
  3. Do soft clipping with a limiter/compressor.  Again, this only affects one channel at a time, so might not be audible.

I’m going with option 1: on most program material the peak level in the low bass is well below full scale, so we don’t need to sacrifice much dynamic range to prevent clipping.

I’ve set my overall digital gains based on a statistical analysis of actual program material from my personal music library (an eclectic mix spanning many genres).  I have about 11,000 flac-encoded tracks, all of them lossless copies from a CD.  Of these, about 10,000 don’t have any clipped samples.  I took those 10,000 tracks, applied my dsp crossover/eq filters (without any additional gain) and measured the digital headroom on the output.

Shown below are histograms (corrected 7/7/2013) of the digital headroom for each dsp channel, when the overall gains are set to match those of the ASP.  Tracks to the left of 0 will have some digital clipping on the indicated channel if the upstream volume control is set at 100% (0dB).  Tracks to the right of 0 will have headroom to spare, but won’t make the best use of available dynamic range/SNR.  The red line indicates (somewhat arbitrarily) the additional gain/attenuation for which exactly 2% of tracks will have some clipping on the indicated channel:

The good news is that on more than 98% of tracks, bass levels are low enough that digital clipping does not occur even at 100% volume; with the subsonic filter in place the woofer channel clips on only a very few tracks.  Only the midrange channel is a bit problematic, needing 2dB of attenuation to eliminate clipping on all but 2% of tracks, or 4dB to eliminate it entirely.  On the tweeter channel we can actually turn up the digital level by 5dB without ever clipping, then attenuate at the analog output, thereby increasing the signal-to-noise ratio.

Provided the upstream digital volume control is set below about -8dB, digital clipping will be absent on all output channels.

Level Matching and Volume Control

To get the greatest dynamic range and SNR, I’ve settled on adding digital gain on each channel so that each is just shy of clipping on some tracks, based on the statistical analysis above.  Fine ±1dB level-matching adjustments I’ll do digitally, guided by acoustic measurements.

My soundcard outputs 1.1Vrms max; my ATI 6012 amplifier clips at 1.0V.  So all channels need at least 20*log10(1.1)=0.8dB of analog attenuation.  This way the amplifier will never clip, but can still be driven to full power.

The volume control, too, really ought to be done after the analog output.  This is tricky.  I can’t find an affordable 6-channel passive attenuator that I like, nor am I sure I’m up to building one.

My typical listening levels span about 24dB.  Achieving this in the digital path means sacrificing up to 24 dB (4 bits) of dynamic range and SNR.  The output is properly dithered so this loss might actually be imperceptible.  Still, I’m considering a hybrid solution: a switched attenuator that adds 12dB of analog attenuation when desired.  That would divide my 24dB of listening range into two 12dB chunks, within each of which the level can be attenuated digitally without sacrificing more than 2 bits of resolution.

I haven’t actually done any of the analog stage yet.  So far I’m doing all the level-matching and volume control in digital.  I’ll fix this soon.  The system still sounds incredible.

Integration with MPD

I use mpd as my music server (until last year I used a squeezebox).  It’s insanely configurable, and I can control it via an iphone, android device, or any pc in my house.  Mpd itself is a server without a graphical interface, but there are many, many featureful graphical clients to choose from.  They can all control mpd over a network.

One great feature of mpd is its ability to pipe the audio output to an external program.  I’ve configured mpd to pipe its output through ecasound, which runs my software dsp and sends the resulting 6-channel audio to the soundcard.  The integration is seamless.  Here’s the section of my mpd.conf that accomplishes this:

audio_output {
        type    "pipe"
        name    "LX521+dither"
        format  "44100:32:2"
        mixer_type  "software"
        command "ecasound -q -b 256 -r:20 -z:nodb -z:mixmode,sum
                 -a:all -f:s32_le,2,44100,i -i:stdin -f:s32_le,6,44100 -o stdout
                 -a:woofer -pf:/etc/lx521woofer.ecp
                 -a:mid -pf:/etc/lx521mid.ecp
                 -a:tweeter -pf:/etc/lx521tweeter.ecp |
                 sox -q -c 6 -r 44100 -b 32 -e float -L -t raw -
                      -e signed -c 6 -b 16 -t alsa surround51:Device dither -s"
}

This tells mpd to output all data at 32-bit/44.1kHz and pipe this to ecasound.  (The “software” flag permits mpd to do digital volume control, which is in 32-bit.)  Ecasound in turn applies the xover/eq filters (defined in the named external files) and pipes the data (in 32-bit floating point) to sox, whose job is to re-quantize the data to 16-bit with dither, and pass it to the sound card.

CPU Load and Stability

I’m running my software dsp on a 6-year-old Pentium 4.  It runs in real time through mpd with no noticeable delays.  I’ve been listening for several weeks now and haven’t heard anything strange, not even a dropped sample, and the system has never crashed.  While playing audio, ecasound’s cpu usage runs at 4%.  Sox uses another 4% for dithering.

Listening Impressions

These are the best loudspeakers I’ve ever heard.  I have the Pluto 2.1 (with ASP) to compare to: the LX521 certainly imparts greater realism, but the difference isn’t staggering.  I feel this is a testament to the Pluto design, not a shortcoming of the LX521.  I find the greatest difference between the two is the greater precision and stability of stereo imaging with the LX521.  On well-made stereo recordings the loudspeakers utterly disappear from aural perception, leaving a convincing acoustic scene that’s actually disconcerting when I listen with eyes open.  On dual-mono recordings the center phantom image is so stable that some listeners have been convinced the sound was emanating from the amplifier!

Everyone will want to know, “does your dsp version sound as good as the ASP version?”  I don’t know.  I haven’t built the ASP and probably won’t, given how well the dsp has worked out.  But I’ve built a dsp version of Pluto 2.1, using the same process described here, and I’ve compared it to the original Pluto 2.1 in extensive side-by-side level-matched listening tests on program material and various test signals.  The only differences I can detect can be attributed to the relatively weak amplifier I used (actually just a surround receiver).  I just don’t have a 4-channel amp that’s comparable to that in the Pluto.  When I build a better amplifier for my dsp Plutos, I’ll know better.

Visual Impressions

Well, I think they’re handsome:

lx521 montage

The little black box on the floor is my USB sound interface (plugged in to the PC off-screen).  A bunch of line-level interconnects run from this to the amp.  I know the placement right next to the guest bed is less than ideal … this is my work space, not my listening room.

Source Code

I don’t feel at liberty to share my full configuration, since it implements something that’s probably proprietary.  See this article for instructions on setting up a general software xover/eq using ecasound; LX521-specific configuration files are available to LX521 construction plan owners in the Orion/Pluto/LX521 forum.  Feel free to download and use the source code for my general-purpose filter plugins (high- and low-shelves, parametric eq, 2nd-order high- and low-pass, LR4 xover, and all-pass).

31 thoughts on “LX521 with software DSP”

  1. Very interesting indeed.

    I am in the process of doing the same thing myself and have just purchased the LX521 plans. I expect delivery this week. I have posted a few tidbits under the username “kazam” over at the LX521 user forums.

    Please drop me a line!

    /M

  2. Nice work. Very inspiring and motivating. Not yet committed myself to building the LX521 but I’m attempting to implement a 2×8 MiniDSP x/o for my Orion system in place of the ASP. Pleasantly surprised with the results so far.
    Please keep us posted with your work.
    J

  3. Richard,

    Thanks for sharing this. Agree with Magnus this is very interesting (and great work).
    I have purchased the LX521 plans and will initially go with the ASP design, but looking forward it would be great to have a “Linkwitz-approved ” 4-way LX521 DSP design. Seems you have a head start to take on the challenge posted by Siegfried on 3/23 🙂

    In the mean time, looking forward to the write-up you mentioned on the more generic DSP filter design.

    Mats (mats 31 on the ORION and LX521 forums)

  4. Richard,
    Fascinating. One concern I’ve had for years, since first building my Orions, was availability of components for analogue through hole components. As an electronics hobbyist if I tell you my first projects used Tubes and that I remember the days of only 3 transistors being available, red spot, green/yellow spot and white spot I give away my age.

    I can envisage you having a very complementary relationship with Siegfried to the benefit of those building his designs. I completed my LX521’s several months ago now and absolutely love them. My Orions are now gathering dust until I can persuade SHWMO to let me resurrect them in our living room for casual listening. I post on OPUG as Mike.

    I will be very interested in following your progress.
    Best Wishes
    Mike

  5. Very cool! I’ve been contemplating either building the LX521 or LXmini and I like your implementation as an all-in-one music playback/asp/EQ setup.

    One question I have — you talk about matching the response of your bi-quads to the published magnitude response using R. I wonder if that algorithm you used could be applied more generally to room-mode equalization, or to matching a response to a house curve.

    Would you be willing to share how you did this in R?

    • I use R’s built-in function “optim” to do the heavy lifting, i.e. to minimize the least-squares error between the target frequency response curve and the curve that results from my dsp filter chain. It takes a while (6+ hrs) to run but the final fit is surprisingly good. I’ve done the same to design eq filters for other speakers with good results. There are fancier approaches, but this works.

      In principle you could modify the target curve to anything you want, e.g. to deal with room modes. But I wouldn’t bother: room mode equalization is a black art and probably best not to get too fussy. I would just do an in-room measurement to identify the worst peaks in the response, and clobber them by putting appropriate notch filters in the “pre1” filter chain. So e.g. you could add “-el:RTpara,-12,80,5.0” to knock out a 12dB peak at 80Hz with Q=5.0.

      Room modes were the original reason I got into DSP, but actually my LX521 placement doesn’t activate any troublesome modes so I haven’t bothered.

  6. Hi Richard,

    Once again I am eternally grateful for all your hard work. It’s enabled me to set up some very impressive speakers – the like of which I could not have managed without you!

    I take on board your comments about 6 gang pots for volume control. That said, I decided to go with an affordable offering from Alps. I’ve noted the current progress on my ‘Ashby Open Baffle’ in my webpage and would welcome any comment you may have. http://dandini.wordpress.com/2014/08/27/ashby-open-baffle-progress/

  7. great work,
    is it possible to help me with alsa problems? I always get a buffer underrun (because alsa has only a 64k buffer).
    I have found no solution on the net.
    Normally you can increase the buffer in /etc/asound.conf, but I do not know the correct phrase in that file with the parameters of your example.
    Please can you help me?
    Is it also possible to send me the files for the 3-way crossover?
    thanks

    wandancer

    • You might try increasing the size of ecasound’s buffer, e.g. by putting “-b:1024” (or more) in the command line.

      The files for the 3-way LX521 crossover are proprietary; for that reason I’ve put them in the Owners’ forum in the official LX521 users’ group.

  8. Thank you very much for sharing this work! Is there any chance that, with SL permission, you could also offer Lxmini config files to us humble followers? I am sorry, I do not even know how much work would be involved in you doing that – so I apologise for even asking. I read your 521 post in the OPLUG and understand that it is not a simple copying of parameters…

    • I haven’t built the LXmini yet. If I ever do (I have no plans to) then I will post the configuration in the official LXmini OPLUG forum.

      But since the LXmini has an “official” minidsp implementation, I suspect you really can just copy the parameters. The biquad digital filters in my software implementation are identical to those implemented in the minidsp, so they should take the same parameters.

  9. Dear Richard,

    This is a very inspiring work. I read your different articles on the topic, and may follow the path for an LXmini job to come. I have a question and would be happy to have your idea on the topic below.

    In your article, you state that: ” Many (most?) multimedia PCs already have a 6- or 8-channel soundcard; ordinarily it would be used for surround sound, but it works just fine for this application. I use an external 8-channel USB audio interface ($30). (Using multiple soundcards isn’t an option as their clocks won’t be sync’d. Clock drift will wreck the synchronization between channels. With a single multi-channel interface, all the DACs run off the same clock.)”.

    For my project I would like to implement a full digital approach by using 2 stereo USB Full Digital Amps (Ex FX-Audio D802): one amp for the left speaker (one channel tweeter + one channel for Woofer), and the other amp for the right speaker. Amps to be located near the speakers.

    In my understanding, the two drivers of the same speaker would rely on the same clock, which seems to be the more important thing. Clock synchronization between the two speakers could be less critical.

    Does this make sense? Could the sync between L and R speakers be more critical than I think?

    • I think you can probably get away with this. Stereo imaging is sensitive to time- and phase-shifts, so it all depends how much clock drift you get. If the drift adds up to several ms while playing a long track you’ll have problems, but I think this is very unlikely to happen. I think Charlie Laub built a system with one Raspberry Pi performing dsp for each channel, and had success. You might want to ask him if he had problems with clock drift.

      • Thank you Richard for the clue. I just looked at Charlie Laub’s talk at Burning Amp 2015.

        He seems pretty happy with 2 RPis and using an NTP daemon. It seems to remove the sync roadblock. Fine!

        I’m happy to discover new things thanks to you.

        Best regards, JM

        • NTP will only sync the date and time; it will not sync the system clock, nor will it sync the clock of the soundcard should your soundcard have its own.

          The NTP sync is still very valuable to get the date/time of your computers in sync, you just have a bit more work to do to actually get your audio in sync.

          You can use the ALSA API to get timestamp information out of the soundcard. It’ll tell you the timestamp of when the info was taken and how much data is waiting in the buffer. From these two pieces of info you can determine exactly where the soundcard is at. It’s basically the same for capture and playback. Some info here: https://www.kernel.org/doc/html/latest/sound/designs/timestamping.html

          Here’s the gist of an example where you capture analog in on boxCapture and send to boxLeft and box Right for playback. In boxCapture you’d get the data one period at a time from ALSA and the timestamp when it was captured. You’d prefix the period of audio with the timestamp it was captured then send it off to boxLeft and boxRight, perhaps over UDP.

          Once received in boxLeft you get info of where the timestamp and delay of where the soundcard is at. Combining these values gives you the timestamp of when the next period of audio stuffed into the buffer will be played. Compare that to the timestamp of the next period of audio you receive from boxCapture, then resample, drop, or stuff samples to bring the two timestamps in sync. Get that working, then dump the same code into boxRight, and your left and right channels will play in sync.

          I’m still working on my code, but it works well enough that I can’t hear shifts in phase in a pure sine wave until above 7kHz. I can’t hear it at all in normal audio. When using similar soundcards it gets by dropping or stuffing a single sample a minute.

          Where you capture audio from a pipe instead of from a hardware soundcard I’ve had good luck creating a timestamp by scaling the period time by the NTP value you can get at /var/lib/ntp/ntp.drift, keeping a running sum of that, and finally adding the timestamp when you first started capturing.

          I haven’t figured a way to get the timestamp info out of alsa without implementing the ALSA API in C to send the actual audio. Aplay, ecasound alsahw output, etc. won’t get it done.

          I’m here trying to understand how to implement digital filters to combine a 7.1 receiver, some old JBL 100s, and an RPI to create an active setup that works with my existing multiroom audio code. Wish me luck!

  10. Hi Richard,
    Very nice job indeed, congratulations.
    I have just completed LX 521.3 speakers, so keeping the passive medium XO, and want to have the 3-way digital filtering running on a PC, set up through Acourate software. I have been reading through your posts on the Oplug forum with interest.
    – If I understand correctly, you also keep the passive medium XO, with its upper mid connected in reverse polarity?
    – How do you manage tweeter polarity? With delay? (The LX521.4 DSP XO uses a polarity change.)
    – You say here your files for the 3-way crossover can be found on the Oplug forum, but I couldn’t find them… (maybe I just missed them!)

    Thank you !

    Francois

    • Thanks!
      I did keep the passive xo, with the same polarities as the original lx521. For each digital crossover I aligned phases with a small delay. Delay values can be found in the files on the Oplug forum.

      • I am trying to make a new project, something similar to the LX521. I just wonder how the lower and upper midrange are connected in the LX521 with the passive crossover: are both in the same polarity, or is the upper one connected in reverse polarity? I ask because I found some information that the same polarity for all drivers is best for open-baffle speakers.

  11. Richard, I have been using a system inspired on yours for a few months, initially with a Raspberry Pi piping out multi-channel out via HDMI, now with a Raspberry Pi 2.

    In your diagram above it looks as if digital processing ACROSS ladspa plugins happens in 64 bits; however, each ladspa filter’s output is then converted to 32-bit floats. So 64-bit processing happens inside your ladspa filters, but results get truncated to 32-bit at each filter step, and the end-to-end processing does not happen in 64-bit doubles, even before the final 16-bit or 24-bit quantization.

    I’ve not yet measured whether this is audible or not, but I’ve modified ecasound to do all processing in 64-bit and interface with (non-standard) ladspa filters in 64-bit; if you are interested:

    – Simply compile ecasound above
    – Modify LADSPA_Data to double in the global ladspa.h file, and recompile the needed ladspa plugins (yours work fine, same as the delay one in CMT). You may need to disable some CMT plugins which do not compile after modifying LADSPA_Data.

    Best – Andres

    • Hi Andres,

      You’re absolutely right: between filter blocks the audio data gets truncated to 32-bit floats (i.e. 24-bit resolution, since 8 of the 32 bits are reserved for the exponent). While it’s possible to keep 64-bit floats all the way through as you have, the quantization distortion from truncating to 24-bit is around -144dB. So it’s a non-issue.

      It is probably important to use double floats for the coefficients and accumulators within each plugin (as mine do by default) so the biquad filters don’t generate audible limit cycles. But converting to 32-bit float at each plugin output should be fine.

  12. Hi Richard
    Many thanks for writing this meaningful article; I’m for now experimenting some software dsp/xover solutions, 4 ways oriented; therefore I’m looking for an inexpensive 8 channel soundboard/DAC; somewhere above you wrote ” I use an external 8-channel USB audio interface ($30).” Could you tell us more about this item and where you got it from ?
    Thanks again, best regards
    JPierre

  13. Richard thanks for a very interesting article. I am thinking about my options for driving a pair of lxminis.
    What I don’t understand is how a cheap 8 channel audio interface can possibly have good enough DACs to ensure the quality of the signal going into your amplifiers. What am I missing?
    Thaks
    John

    • Hi John,
      “Good enough” DACs are basically a solved problem of electrical engineering. My considered opinion is that all the fuss about high-end DACs is overblown, by and for the sole advantage of the audiophile sector of the consumer electronics market. Even at the level of the cheap, mass-produced DACs like the ones I use, a measurable improvement in SNR or distortion does not necessarily translate to an audible improvement. For a long time I found such a statement hard to believe, so I understand your skepticism. But having done some careful level-matched A/B/X testing of several DACs (some in the $1000+ range) I am now utterly convinced. I highly recommend doing this testing for yourself. In any case, even if there are quality problems with cheap DACs, they are bound to be at least an order of magnitude smaller than the response errors (both linear and nonlinear) inherent in the mechanical moving parts of loudspeaker drivers and their acoustical interaction with any baffle structure. That’s where the real engineering challenge is, and where to focus most effort and expense.
