In my last post, I wrapped Steve's tremolo effect up in the little language we've been working on, which I named Bell. By the way, here is the source code to the Bell parser and compiler: https://github.com/blanu/Bell
Right now, we can use Bell to incorporate Steve's custom tremolo effect into an effect chain, just the same as with the Audio library's built-in effects. My eventual goal is to enhance the Bell language to the point that it can be used to implement such custom effects as well. However, there is quite a bit of work to get there.
In the meantime, I have made some enhancements to the language that will move us on down the road to that goal. Here is a list of changes in the latest version:
- Modular microformats - I felt like the parser was getting too complicated as I was trying to parse out several different microformats. There is one syntax for enumerating module instances, another for effect flows, and a third for event handlers. In order to make this more modular and extensible, I have added a keyword for each syntactical microformat. So each line now begins with a keyword that says which format the line is in. This should make it easy to add additional syntax in the future.
- Functions - Similar to event handlers, functions are named code blocks. Unlike event handlers, they are called from other code blocks and not by the system.
- Self keyword - This keeps the syntax regular. When you are calling a function rather than a module instance, we still need a name for the subject, so you can use self in this situation. This feature will make more sense with code examples, I hope.
- User-defined objects - Event handlers and functions can now be grouped together into user-defined objects. For each user-defined object, the compiler will generate a C++ class, as well as the logic to call the object's event handlers when there is a system event.
- Capabilities - Module instances are a type of capability, which is to say a key that unlocks the ability to invoke a specific effect. Previously, module instances were global to the whole program. Now they are passed to objects, event handlers, and functions with the "uses" keyword. An event handler or function can only get capabilities from its containing object. No capability means that the compiler won't generate the necessary code to invoke the associated effect, avoiding unwanted side effects.
Here is some example Bell code:
instance codec : AudioControlSGTL5000
instance input : AudioInputI2S
instance tremolo : AudioEffectTremolo
instance mixer : AudioMixer4
instance output : AudioOutputI2S
flow input -> tremolo -> mixer -> output
flow input -> mixer
object main uses codec mixer tremolo
event main setup uses codec mixer tremolo : self setupCodec . self setupMixer . self setupTremolo
function main setupCodec uses codec : codec enable ; inputSelect 0 ; micGain 0 ; lineInLevel 5 5 ; lineOutLevel 20
function main setupMixer uses mixer : mixer gain 0 0.3 ; gain 1 0.7
function main setupTremolo uses tremolo : tremolo begin 100
Notice how each line now begins with a keyword telling you which format is used for the line. Also, object definitions are line-oriented, not nested, so each event and function must include the name of the object to which it belongs. The object "main" has an event handler "setup" that holds the "codec", "mixer", and "tremolo" capabilities. It uses "self" to call functions on the main object, such as "self setupCodec". Within a code block, "." separates full statements (each naming its subject), while ";" chains another call onto the current subject, as in "codec enable ; inputSelect 0". Functions allow us to break our code up into smaller pieces. Please also notice how the "setupCodec" function has the "codec" capability, allowing it to make calls to the "codec" effect. Without that capability, it would be unable to access the codec. This way, you always know what your code is doing in terms of effects.
I should mention that the "uses" keyword introduces a very simple type system. Steve is very resistant to type systems, with good reason, so this is done sparingly and with solid motivation. We need to know the type of each function (where the type is currently just the set of capabilities it uses) so that the compiler can give each function the right references to the module instances. We could have used type inference instead of explicit types, but this would only change things syntactically; we'd still be adding a type system. I didn't have time to write type inference, so explicit typing was faster. Ultimately, we will probably need to add a bit more typing later, again to help the compiler: since the compilation target is C++, we will need to know which types C++ requires in order to compile the code.
Let's look at the output of the code generator.
First, the .ino file. I did have to edit this one slightly because "main" is a reserved name in C++. So, in short: don't call your object "main".
#include <Arduino.h>
#include "Audio.h"
#include "main.hpp"
#include "AudioControlSGTL5000Module.h"
#include "AudioControlSGTL5000Universe.h"
#include "AudioInputI2SModule.h"
#include "AudioInputI2SUniverse.h"
#include "AudioEffectTremoloModule.h"
#include "AudioEffectTremoloUniverse.h"
#include "AudioMixer4Module.h"
#include "AudioMixer4Universe.h"
#include "AudioOutputI2SModule.h"
#include "AudioOutputI2SUniverse.h"
AudioControlSGTL5000 codec;
AudioControlSGTL5000Module codecModule(&codec);
AudioControlSGTL5000Universe codecUniverse(&codecModule);
AudioInputI2S input;
AudioInputI2SModule inputModule(&input);
AudioInputI2SUniverse inputUniverse(&inputModule);
AudioEffectTremolo tremolo;
AudioEffectTremoloModule tremoloModule(&tremolo);
AudioEffectTremoloUniverse tremoloUniverse(&tremoloModule);
AudioMixer4 mixer;
AudioMixer4Module mixerModule(&mixer);
AudioMixer4Universe mixerUniverse(&mixerModule);
AudioOutputI2S output;
AudioOutputI2SModule outputModule(&output);
AudioOutputI2SUniverse outputUniverse(&outputModule);
Main mainInstance(&codecUniverse, &mixerUniverse, &tremoloUniverse);
AudioConnection connection0_0a(input, 0, tremolo, 0);
AudioConnection connection0_0b(input, 1, tremolo, 1);
AudioConnection connection0_1a(tremolo, 0, mixer, 0);
AudioConnection connection0_1b(tremolo, 1, mixer, 1);
AudioConnection connection0_2a(mixer, 0, output, 0);
AudioConnection connection0_2b(mixer, 1, output, 1);
AudioConnection connection1_0a(input, 0, mixer, 0);
AudioConnection connection1_0b(input, 1, mixer, 1);
void setup()
{
mainInstance.setup();
}
void loop()
{
}
We now have this Main class, which is our user-defined object. The compiler puts it in its own files (.hpp and .cpp) and also generates the code to call its events. Since the object has a setup event, the .ino file calls it from the Arduino setup() function, passing in the capabilities it needs.
Here is the header file for our Main class:
#ifndef _MAIN_H_
#define _MAIN_H_
#include <Arduino.h>
#include "Audio.h"
#include "AudioControlSGTL5000Universe.h"
#include "AudioMixer4Universe.h"
#include "AudioEffectTremoloUniverse.h"
class Main
{
public:
Main(AudioControlSGTL5000Universe *codec, AudioMixer4Universe *mixer, AudioEffectTremoloUniverse *tremolo) : codec(codec), mixer(mixer), tremolo(tremolo) {}
void setup();
private:
void setupCodec();
void setupMixer();
void setupTremolo();
AudioControlSGTL5000Universe *codec;
AudioMixer4Universe *mixer;
AudioEffectTremoloUniverse *tremolo;
};
#endif
And here is the .cpp file:
#include "main.hpp"
void Main::setup()
{
setupCodec();
setupMixer();
setupTremolo();
}
void Main::setupCodec()
{
codec->enable();
codec->inputSelect(0);
codec->micGain(0);
codec->lineInLevel(5, 5);
codec->lineOutLevel(20);
}
void Main::setupMixer()
{
mixer->gain(0, 0.3);
mixer->gain(1, 0.7);
}
void Main::setupTremolo()
{
tremolo->begin(100);
}
So we have added several new features to the language, and so far the generated C++ code is still quite readable. Our modular parser will hopefully be extensible enough for our future needs as the language evolves. As we build out the syntax, I'm also converting more parts of the Audio library into modules and checking that code generation works as expected as we expand the codebase to which we apply it. Now that we can generate user-defined C++ classes in addition to the .ino project files we've been making so far, we are getting closer to being able to write our own effects in this burgeoning Bell language. In the next post, we'll go back to looking at the C++ code and the virtual machine that supports its execution at runtime.