In the process I have discovered that S3D does NOT work well with large files. One of my images resulted in a 100MB (ASCII) STL file. It wasn't too bad until the slicer started working on it... then it blew up to over 8GB and nearly hosed my computer. What in the heck can its internal representation be that it runs into the gigabytes?
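Just to put rough numbers on it (the per-facet byte counts are my own assumptions, not anything from S3D), here's a back-of-the-envelope sketch of what that 100MB ASCII file likely represents, and why multi-GB usage can't just be the mesh itself:

```python
# Rough estimate only. Binary STL is a fixed 50 bytes per triangle;
# an ASCII facet block is assumed here to average ~280 bytes of text.
ascii_bytes = 100 * 1024**2            # the ~100 MB ASCII file
bytes_per_ascii_facet = 280            # assumed average size of one facet block
triangles = ascii_bytes // bytes_per_ascii_facet

binary_bytes = 80 + 4 + triangles * 50   # binary STL: header + count + facets
print(f"~{triangles:,} triangles, ~{binary_bytes / 1024**2:.0f} MB as binary STL")

# Even storing 9 double-precision vertex coords plus a normal per triangle
# (12 doubles = 96 bytes) gives only a few tens of MB for the raw mesh,
# so gigabytes of usage points at per-layer intermediate data instead.
raw_mesh_bytes = triangles * 12 * 8
print(f"~{raw_mesh_bytes / 1024**2:.0f} MB for raw double-precision geometry")
```

That works out to roughly 370k triangles and well under 100MB of raw geometry, which is why the per-layer bookkeeping described below seems like the more likely culprit.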
It's plausible. If S3D keeps the geometry it computes for each slice, then with a lithophane-type model each slice might have a lot of vertices and small edges to store. On top of that, I'm guessing it has to find all the vertices that are coincident with each other on each layer, since the STL file just lists each triangle separately and doesn't tell you which triangles share which vertices and edges. To write efficient g-code, I imagine it has to figure all that out for each layer.
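For anyone curious what that "welding" step looks like, here's a minimal sketch (my own illustration, not S3D's actual code) of turning STL-style triangle soup into an indexed mesh by quantizing coordinates and merging duplicates:

```python
def weld_vertices(triangles, tol=1e-6):
    """Build an indexed mesh from STL 'triangle soup'.

    `triangles` is an iterable of ((x,y,z), (x,y,z), (x,y,z)) tuples.
    Coordinates are quantized to `tol` so vertices that should be identical
    but differ by float noise get merged into one shared vertex.
    """
    index_of = {}   # quantized coordinate -> vertex index
    vertices = []   # unique vertices
    faces = []      # each face is a tuple of 3 vertex indices

    for tri in triangles:
        face = []
        for v in tri:
            key = tuple(round(c / tol) for c in v)
            i = index_of.get(key)
            if i is None:
                i = len(vertices)
                index_of[key] = i
                vertices.append(v)
            face.append(i)
        faces.append(tuple(face))
    return vertices, faces


# Two triangles sharing an edge collapse to 4 unique vertices:
tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
        ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
verts, faces = weld_vertices(tris)
print(len(verts), faces)   # 4  [(0, 1, 2), (1, 3, 2)]
```

Note that the `index_of` dictionary holds an entry for every unique vertex, which is exactly the kind of structure that eats memory on a dense model.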
Matching points can be done in linear (or near-linear) time, but it may require lots of memory depending on the algorithm used. And at the same time, it has to do some layer-to-layer comparisons, which may also add to the memory burden if there's a tradeoff to be made between CPU time and memory usage.
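The classic trade-off looks something like this (again just an illustrative sketch, not how S3D actually does it): hashing quantized coordinates is near-linear but keeps a table of every point in memory, while sorting is O(n log n) with only the sorted key list as overhead.

```python
def match_hashed(points, tol=1e-6):
    # Near-linear: one dict lookup per point, but the whole table
    # of quantized keys lives in memory at once.
    buckets = {}
    for i, p in enumerate(points):
        buckets.setdefault(tuple(round(c / tol) for c in p), []).append(i)
    return [g for g in buckets.values() if len(g) > 1]

def match_sorted(points, tol=1e-6):
    # Lower-memory alternative: sort quantized keys and scan for runs of
    # equal neighbours; pays O(n log n) time instead of extra storage.
    keyed = sorted((tuple(round(c / tol) for c in p), i)
                   for i, p in enumerate(points))
    groups, run = [], []
    for j, (key, i) in enumerate(keyed):
        if j > 0 and key == keyed[j - 1][0]:
            run.append(i)
        else:
            if len(run) > 1:
                groups.append(run)
            run = [i]
    if len(run) > 1:
        groups.append(run)
    return groups
```

If a slicer leans toward the hashed style for speed, and then also caches per-layer results for layer-to-layer comparisons, the memory footprint adds up quickly.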
There's no programming magic to it, but given that S3D is known for speed, it seems reasonable that memory usage is the compromise it makes. Plus, of course, there's the likelihood of memory leaks in a program like this.