Hi, I found this audio .pde at http://www.ee.columbia.edu/~dpwe/resources/Processing/.
Since I have no idea where else to learn about audio filters, I am trying to learn from that code by editing it and merging it into something more useful.
Processing.org has this example on how to open a file for processing: http://www.processing.org/reference/selectInput_.html
When I try to merge the two (as follows):
import ddf.minim.analysis.*;
import ddf.minim.*;
Minim minim;
AudioPlayer sound;
FFT fft;
Filter filter;
boolean paused = false;
// spectrum display configuration
int buffer_size = 2048; // also sets FFT size (frequency resolution)
float gain = 40; // in dB
float dB_scale = 2.0; // pixels per dB
int spectrum_height = 200; // determines range of dB shown
int legend_height = 20;
int spectrum_width = 512; // determines how much of spectrum we see
int legend_width = 50;
// filter properties
float centerFreq = 1000;
float bandwidth = 500;
String fselR = "";
void setup()
{
  try {
    // selectInput() needs the name of a callback function as its second argument
    selectInput("Select a file to process:", "fileSelected");
  }
  catch (Exception e) {
  }
  // size(legend_width+spectrum_width, spectrum_height+legend_height, P2D);
  // size has to be explicit for the applet export to extract it correctly
  size(562, 220, P2D);
  //textMode(SCREEN);
  textFont(createFont("SansSerif", 12));
  minim = new Minim(this);
  sound = minim.loadFile(fselR, buffer_size);
  // sound = minim.loadFile("sound.mp3", buffer_size);
  // make it repeat
  sound.loop();
  // create an FFT object that has a time-domain buffer
  // the same size as the player's sample buffer
  fft = new FFT(sound.bufferSize(), sound.sampleRate());
  // a tapered window is important for a log-domain display
  fft.window(FFT.HAMMING);
  // create a filter and insert it into the effects chain
  filter = new Filter(centerFreq, bandwidth, sound.sampleRate());
  sound.addEffect(filter);
}
// Convert a spectrum value into a y pixel position
int spectrum_y(float bandval)
{
  float val;
  if (bandval > 0) {
    val = dB_scale*(20*((float)Math.log10(bandval)) + gain);
  } else {
    val = -200; // avoid log(0)
  }
  int y = spectrum_height - Math.round(val);
  if (y > spectrum_height) { y = spectrum_height; }
  return y;
}
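For reference, the dB-to-pixel mapping in spectrum_y can be checked outside Processing with a small plain-Java program (my own check, using the same constants as the sketch, not part of the Minim example):

```java
// Plain-Java check of the spectrum_y() mapping used in the sketch
public class SpectrumYCheck {
    static final float gain = 40;        // dB, as in the sketch
    static final float dB_scale = 2.0f;  // pixels per dB
    static final int spectrum_height = 200;

    static int spectrumY(float bandval) {
        float val;
        if (bandval > 0) {
            // magnitude -> dB, shifted by the gain, scaled to pixels
            val = dB_scale * (20 * ((float) Math.log10(bandval)) + gain);
        } else {
            val = -200; // avoid log(0)
        }
        int y = spectrum_height - Math.round(val);
        if (y > spectrum_height) { y = spectrum_height; } // clamp to the bottom
        return y;
    }

    public static void main(String[] args) {
        System.out.println(spectrumY(1.0f)); //  0 dB -> 200 - 2*40 = 120
        System.out.println(spectrumY(0.1f)); // -20 dB -> 200 - 2*20 = 160
        System.out.println(spectrumY(0.0f)); // clamped to the bottom: 200
    }
}
```

So with gain = 40 dB and 2 pixels/dB, a full-scale band sits 80 pixels above the bottom of the 200-pixel spectrum, and anything at or below -20 dBFS relative to the gain offset is clamped to the baseline.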
// selectInput() callback: Processing calls this itself, passing the chosen File,
// so it must have exactly this signature (void, one File parameter)
void fileSelected(File selection) {
  if (selection == null) {
    println("Window was closed or the user hit cancel.");
    fselR = "";
  } else {
    println("User selected " + selection.getAbsolutePath());
    fselR = selection.getAbsolutePath();
  }
}
void draw()
{
  // etc...
- - -
(the italic lines are my own merging code)
At runtime it fails with "GLException...: cannot call invokeAndWait...", as if the main audio example won't permit a pause in execution during which the path of the picked *.mp3 is stored into a string variable (to be used at: sound = minim.loadFile(fselR, buffer_size);).
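Part of what is going on here is that Processing's selectInput() is asynchronous: it returns immediately and the callback is invoked later from another thread, so fselR is still "" by the time minim.loadFile(fselR, buffer_size) runs in setup(). The timing problem can be sketched in plain Java (my own toy model with a hypothetical /tmp/sound.mp3 path, not the actual Processing API):

```java
import java.util.concurrent.CountDownLatch;

// Toy model of an asynchronous file picker: the "dialog" runs on
// another thread and invokes a callback later, so reading the result
// right after the call still sees the old (empty) value.
public class AsyncSelectSketch {
    static volatile String fselR = "";
    static final CountDownLatch done = new CountDownLatch(1);

    // stand-in for selectInput(prompt, "fileSelected"): returns immediately
    static void selectInput(String prompt) {
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) {}
            fileSelected("/tmp/sound.mp3"); // pretend the user picked a file
        }).start();
    }

    static void fileSelected(String path) {
        fselR = path;
        done.countDown(); // the safe place to start using the path
    }

    public static void main(String[] args) throws InterruptedException {
        selectInput("Select a file to process:");
        // this is what the merged setup() effectively does:
        System.out.println("right after selectInput: \"" + fselR + "\""); // still ""
        done.await(); // wait until the callback has fired
        System.out.println("after the callback: \"" + fselR + "\"");
    }
}
```

Under that assumption, one fix would be to do nothing with fselR inside setup() and instead move the minim.loadFile(...) / sound.loop() calls into fileSelected(), where the path actually exists.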