this is microStrat.

Embedded Systems | C | Music | Signal Processing

a guitar whose signals are all generated internally by a microcontroller, opening up new creative possibilities and unique sounds through computational logic, signal processing, and touch-sensitive sensors.


Overview

An interesting name, isn’t it? As someone with a deep interest in music and the creation of instruments, it felt natural to build an embedded system that takes on every aspect of being a musical device. The microStrat is an “electric-electric” guitar: it differs from an ordinary electric guitar in one key aspect, the creation of sound. In an electric guitar, plucking a metal string makes it vibrate at a specific frequency; the vibration changes the magnetic field of the pickup, which induces an electromotive force (EMF) and produces an electrical signal to be amplified. In essence, this is a mechanical-to-electrical energy conversion by electromagnetic induction.

In the electric-electric guitar, all signals are created internally by the microcontroller of the embedded system. Using embedded C, an electrical signal is created computationally through logic and signal processing, then transduced into an acoustic signal by an external speaker. The guitar’s mechanical strings are replaced by sensors: a touch event triggers a sound wave at a particular frequency. Such an instrument extends what one can do and produce while playing: it removes the limits of mechanical sound creation, making it possible to create unique sounds digitally and to process the signal directly, applying cool effects for a different auditory experience.

Media


Description

  1. Microcontroller

    • ADC: The MCU utilizes an Analog-to-Digital Converter (ADC) to sample analog signals from sensors, such as the touch sensors in place of mechanical strings, or a potentiometer to adjust and control the level of a given audio parameter.
    • GPIOs for Applied Effects: General-Purpose Input/Output (GPIO) pins on the MCU are utilized to interface with external components, such as capacitive buttons, allowing users to apply various effects to the sound output in real-time.
    • Timers and Interrupts: The MCU's timers and interrupts are employed to accurately time events, such as triggering sound generation or applying effects. They ensure precise synchronization and efficient execution of tasks. This is critical to avoid the arpeggio-like effect of purely sequential execution: with accurate timing, multiple sound waves can be combined so that several notes play simultaneously, rather than one sound at a time.
    • Callback Functions for Sound Creation and Mixing: The system employs callback functions, triggered by specific touch events, to generate and mix sounds. These functions use mathematical algorithms and synthesis techniques to create the desired audio output.
    • PWM for Sound: Pulse Width Modulation (PWM) signals are generated by the MCU and used to actuate the speaker. By modulating the width of the pulses, the MCU controls the amplitude and frequency of the acoustic signals produced, allowing for precise sound reproduction.
  2. Signal Generation

    • Creation of Base Sine Wave: The system generates a base sine wave, using an algorithm involving the Fourier Series, which provides the fundamental frequency component for the audio synthesis process.
    • Audio Synthesis and Sampling: The audio waveform is synthesized by adding harmonics to the base sine wave and adjusting their amplitudes and phases. The synthesized waveform is then sampled at a specific sampling frequency, encoded with the Pulse Code Modulation (PCM) technique, and stored in a buffer array in memory. Techniques such as wave shaping, frequency modulation, and additive synthesis are employed to achieve a wide range of sound variations.
    • Storage and Memory Management: The buffer array holds the sampled audio waveform and allows for efficient storage and retrieval of audio data during real-time processing.
  3. Signal Processing

    • FFT: Fast Fourier Transform is utilized to analyze the frequency content of the audio waveform. It decomposes the waveform into its constituent frequency components, enabling the system to identify specific frequencies for further processing or manipulation.
    • Convolution: Convolution is used to apply audio effects by convolving the sampled audio waveform with impulse responses representing the desired effect. This process modifies the frequency and temporal characteristics of the sound, resulting in the desired audio effect.
    • Nonlinear Amplification: Nonlinear amplification alters the amplitude of the audio waveform in a nonlinear fashion, adding character and dynamics to the sound output and creating effects such as distortion or overdrive.
    • Real-time Processing: The signal processing operations are performed in real-time, ensuring minimal latency between input and output. The efficient utilization of computational resources and optimized algorithms allow for smooth and responsive application of audio effects. The system manipulates the stored buffer array to add general audio effects such as delay, reverb, distortion, and more, providing a diverse range of sound manipulations.