I don't know exactly what track.left.get(i) and track.right.get(i) are doing, but I'm sure they don't always return the same values when you run the application multiple times.
If you take a short sound clip (just a few seconds), loop it, and print the values to the terminal, you'll see a sequence you can easily recognize, but it's not identical on every run; sometimes there is a visible pattern (if you make a quick graphic out of the values), but the values don't match exactly, so at some level it's pseudo-random.
I was trying to generate an image from a sound file, but if gathering the data isn't reproducible from the start (run the sketch twice and get the same output), it doesn't really work as a visualization tool; it's more like 'sound-based generation'.
Does anybody have ideas on how to solve this?
Not playing the file while you analyse it helps a bit (in terms of repeatability). I also played with the buffer size, but that doesn't help much.
In the end, the best approach I found was to run the analysis a few times and average the results, which cleans out some of the 'random noise'... but of course this is a workaround, not something to build upon.
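To make the averaging idea concrete, here is a minimal sketch in plain Java (not the actual Minim code; the `noisyRead` function and the jitter magnitude are made-up stand-ins for the non-repeatable buffer reads) showing why averaging several runs smooths out the per-run noise:

```java
import java.util.Random;

public class AverageRuns {
    // One simulated "analysis run": the true sample value plus a bit
    // of jitter, standing in for the non-repeatable buffer reads.
    static double noisyRead(double trueValue, Random rng) {
        return trueValue + (rng.nextDouble() - 0.5) * 0.1;
    }

    // Average several runs so the jitter largely cancels out.
    static double averageOf(int runs, double trueValue, Random rng) {
        double sum = 0;
        for (int i = 0; i < runs; i++) {
            sum += noisyRead(trueValue, rng);
        }
        return sum / runs;
    }

    public static void main(String[] args) {
        Random rng = new Random(); // unseeded: each run differs, like the real sketch
        System.out.println("single   = " + noisyRead(0.42, rng));
        System.out.println("averaged = " + averageOf(1000, 0.42, rng));
    }
}
```

The single read can be off by the full jitter amount, while the averaged value lands much closer to the true sample on every run, which is exactly why the workaround looks more repeatable.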
As I see it, you should be able to get the data from a sound file as reliably as you get it from a text file.
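That reliability is possible if you read the stored samples directly instead of sampling the live playback buffer. A sketch of the idea using `javax.sound.sampled` (part of the standard Java library, so available to Processing sketches; the helper names here are my own, and it writes a little test tone first just to be self-contained):

```java
import javax.sound.sampled.*;
import java.io.*;

public class DeterministicRead {
    // Write a one-second 440 Hz sine tone as a mono 16-bit WAV,
    // just so the example has a file to read.
    static File writeTestWav() throws Exception {
        AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
        byte[] pcm = new byte[44100 * 2];
        for (int i = 0; i < 44100; i++) {
            short s = (short) (Math.sin(2 * Math.PI * 440 * i / 44100.0) * 10000);
            pcm[2 * i] = (byte) (s & 0xff);            // little-endian low byte
            pcm[2 * i + 1] = (byte) ((s >> 8) & 0xff); // high byte
        }
        File wav = File.createTempFile("tone", ".wav");
        wav.deleteOnExit();
        try (AudioInputStream src = new AudioInputStream(
                new ByteArrayInputStream(pcm), fmt, 44100)) {
            AudioSystem.write(src, AudioFileFormat.Type.WAVE, wav);
        }
        return wav;
    }

    // Read all raw PCM bytes straight from the stored samples;
    // no playback is involved, so every run returns identical data.
    static byte[] readSamples(File wav) throws Exception {
        try (AudioInputStream in = AudioSystem.getAudioInputStream(wav)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws Exception {
        File wav = writeTestWav();
        byte[] first = readSamples(wav);
        byte[] second = readSamples(wav);
        System.out.println("identical = " + java.util.Arrays.equals(first, second));
    }
}
```

Two independent reads give byte-for-byte identical data, which is the behaviour you'd want before turning the samples into an image.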
It would be nice to hear if anybody has gotten better repeatability in audio analysis with this or another library for Processing.