David Thiel's discussion of the creation of the Reactor sounds:

"Reactor was done with software synthesis.  The 'DSP' was a 1 MHz 6502 that ran algorithms that created a sound data stream that was slammed into an 8 bit DAC as fast as possible.  There was no hardware timebase so sampling frequency was a function of the complexity of the algorithm that was making the sound data. If a loop had a branch in it you had to take care that all the possible paths would execute the same number of cycles (else your sampling frequency would change (and your pitch would vary).

The trick in making something that resembled a distorted lead instrument with such sparse resources was that the waveform I used for the lead instrument was not in ROM.  I pointed the synthesizer algorithm at the 128 bytes of RAM that held my temporary variables, global variables, and stack, and used that as my waveform.  I was playing my variables as an instrument.  Since as a matter of course they were changing (in order to run the algorithm), the instrument samples were dynamic and related to whatever note I was trying to play.
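
A sketch of that "play your own variables" trick, again in C rather than 6502 code.  The struct layout, the names, and the way pitch is derived are assumptions; the struct stands in for the 6502's 128 bytes of zero-page RAM:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the 128 bytes of zero-page RAM that held the stack,
     * globals, and temporaries.  The oscillator below reads this block
     * as its wavetable, so the timbre mutates as the code runs. */
    static struct {
        uint8_t phase;
        uint8_t step;
        uint8_t scratch[126];  /* the rest of the "zero page" */
    } ram;

    static const uint8_t *wavetable = (const uint8_t *)&ram;

    static void play_note(uint8_t step, unsigned n_samples)
    {
        ram.step = step;            /* setting the pitch mutates RAM */
        while (n_samples--) {
            ram.phase += ram.step;  /* so does advancing the phase   */
            putchar(wavetable[ram.phase & 0x7F]);  /* variables = samples */
        }
    }

    int main(void)
    {
        play_note(3, 2000);  /* different notes change the variables, */
        play_note(5, 2000);  /* so each note reshapes the instrument  */
        return 0;
    }

Because the oscillator's own state lives inside the wavetable it reads, every note both plays the waveform and rewrites it, which is the "dynamic and related" behavior Thiel describes.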

The rest of the sound effects were algorithms too.  The curious thing about games of that period from the American pinball companies (Williams, Gottlieb) is that they both used the same technique, 8-bit processor synthesis.  Even though some sound events had as many as four components, all sound events were mutually exclusive.  Imagine: in Robotron, for instance, there is never more than one sound event at a time!  And it sounds so rich and dense.  If a new sound truncates a sound that is playing, our mind goes, 'oohh, this new sound is loud' (not 'wait a minute, what happened to the sound that was playing?').  So much for polyphony."
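
That one-voice, truncate-on-retrigger event model can be sketched as follows.  The polled flag and all the names are illustrative assumptions; on the real boards the game CPU's sound command arrived at the sound CPU via an interrupt:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Monophonic event model: at most one sound plays at a time, and a
     * new event simply truncates whatever is playing.  No mixing, no
     * queueing; the abrupt cut reads to the ear as "a new, loud sound". */
    static volatile bool    new_event;
    static volatile uint8_t next_event_id;

    static void write_dac(uint8_t sample) { putchar(sample); }

    static void play_event(uint8_t id)
    {
        uint8_t phase = 0;
        for (unsigned n = 0; n < 8000; n++) {
            if (new_event)       /* truncation point: abandon this    */
                return;          /* sound the instant a new one lands */
            phase += id;         /* trivial per-event "algorithm"     */
            write_dac(phase);
        }
    }

    static void sound_mainloop(void)
    {
        for (;;) {               /* strictly one event at a time */
            if (new_event) {
                uint8_t id = next_event_id;
                new_event = false;
                play_event(id);
            }
        }
    }

    int main(void)
    {
        next_event_id = 3;       /* pretend the game requested event 3 */
        new_event = true;
        sound_mainloop();        /* never returns */
    }

Even an event with several components is still a single algorithm running alone; the perceived richness comes from rapid truncation, not from mixing voices.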