What are some best practices for editing and processing MIDI and audio data?
Last updated on Sep 10, 2024
MIDI and audio data are two common formats for recording and producing music, but they have different characteristics and require different editing and processing techniques. In this article, you will learn some best practices for working with MIDI and audio data in audio engineering, such as how to optimize performance, avoid errors, enhance quality, and achieve creative effects.
MIDI basics
MIDI stands for Musical Instrument Digital Interface, and it is a protocol that allows electronic devices to communicate musical information, such as notes, pitch, velocity, and control messages. MIDI data does not contain any sound, but rather instructions for how to play a sound. MIDI data can be recorded, edited, and processed using software instruments, such as synthesizers, samplers, and drum machines, or hardware devices, such as keyboards, controllers, and modules.
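To give a rough idea of what "instructions rather than sound" means in practice, here is a minimal Python sketch that builds the three raw bytes of a MIDI 1.0 Note On message. The channel, note, and velocity values are arbitrary examples, not anything prescribed by the article.

```python
# Minimal sketch: the three raw bytes of a MIDI 1.0 Note On message.
# A Note On is a status byte (0x90 + channel) followed by two data bytes:
# the note number (0-127) and the velocity (0-127).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Return the raw bytes for a Note On message (values are examples)."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("channel must be 0-15, note and velocity 0-127")
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 1 at a moderate velocity:
print(note_on(channel=0, note=60, velocity=100).hex(" "))  # -> "90 3c 64"
```

Nothing in those three bytes is audible on its own; a synthesizer, sampler, or hardware module has to receive them and decide what sound to produce.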
♦ Understanding MIDI, beyond simply knowing the term, means knowing a basic but fundamental strategy for implementing various stages of music production and automation. ♦ It is a network communication protocol (asymmetric and unidirectional in its 1.0 version). ♦ To put it simply, only COMMANDS travel over the MIDI network, and they only take on meaning when they are received by a receiving device properly connected within that network. ♦ The use of MIDI should not be associated with the world of audio, but with the control possibilities offered by the receiving device. ♦ MIDI's bad reputation as something that sounds mechanical or low-quality IS A MISTAKE, born of pure ignorance.
Audio basics
Audio data, on the other hand, is a representation of sound waves in digital form, such as WAV, MP3, or AIFF files. Audio data can be recorded, edited, and processed using microphones, audio interfaces, mixers, and audio editing software. Audio data contains the actual sound of the source, such as a voice, a guitar, or a piano, and it can be manipulated in various ways, such as changing the volume, pitch, tone, or effects.
An analog audio signal is the electrical signal generated by a device such as a microphone. This electrical signal can be represented digitally by sampling it thousands of times per second at a given sampling rate. Converting an analog audio signal to a digital one is done by a component called an analog to digital converter (ADC), with the opposite process being done by a digital to analog converter (DAC). Audio interfaces are devices which typically contain both ADC’s and DAC’s allowing users to digitally record and play back analog signals. Once recorded, it’s possible to manipulate the digital audio data, changing how it sounds before being converted back to analog. This opens many exciting possibilities for media production.
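As a hedged illustration of the sampling process described above, the snippet below "samples" a 440 Hz sine wave (standing in for the analog signal) at 44.1 kHz and quantizes each sample to a 16-bit integer, which is roughly what an ADC does. The rate, frequency, and bit depth are example values; real converters work in hardware and are considerably more involved.

```python
import math

SAMPLE_RATE = 44_100   # samples per second (CD-quality rate, used as an example)
FREQUENCY = 440.0      # "analog" source: a 440 Hz sine wave
DURATION = 0.01        # seconds of signal to capture

# Sample the continuous signal at discrete points in time...
samples = [
    math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(int(SAMPLE_RATE * DURATION))
]

# ...and quantize each sample to a 16-bit integer, as a 16-bit ADC would.
quantized = [int(round(s * 32767)) for s in samples]

print(quantized[:8])  # first few 16-bit sample values
```

Playing the audio back is the reverse trip: the stream of integers goes through a DAC and becomes a continuous electrical signal again.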
Playing MIDI through software instruments definitely uses more CPU, but having the kind of control to go back and change notes is worth it. Plus, if you print the parts to audio instead of running a bunch of software instruments, you end up with a single audio file instead of a bunch of MIDI tracks each feeding its own instrument.
MIDI is all about non-destructive editing and the flexibility to modify notes, duration, pitch, velocity, timbre, and events. MIDI also lets you choose your instruments or adjust your score quickly for a different instrument. Maybe you wrote a piano part and want to change it to a string part. MIDI also offers a wide range of integration capabilities: through System Exclusive, it provides several system-exclusive events that can be used to program musical instruments and devices. MIDI data is significantly smaller than audio data. Virtual instruments use MIDI in combination with audio samples, which can consume CPU and memory. To address this, you can "freeze" the track (a Pro Tools term), which saves CPU and memory. MIDI changed how I composed music 35 years ago.
While it's a known fact that MIDI playback uses more CPU and computing power than audio playback, we now have various tools that can deconstruct audio into its component characteristics. Techniques like de-reverb, stem separation, and many others are opening up possibilities for retaining and reshaping a recorded signal that, only a couple of years ago, were possible only through MIDI.
♦ The dichotomy of MIDI vs. AUDIO is absolutely ridiculous. They are two different tools, and pitting them against each other is a fundamental error. ♦ Don't fall into the trap of thinking one technology replaces the other. That would be like imagining a false dichotomy between an LED monitor and a hard drive. ♦ You cannot build an efficient career without deeply understanding these differences. ♦ The real error is not knowing the technologies. The debate over file sizes, or destructive vs. non-destructive editing, only reflects the early days of both technologies (during the nineties). ♦ Audio and MIDI are technologies that integrate into diverse ecosystems, and neither replaces the other.
Currently, most MIDI hardware and software still use the MIDI 1.0 protocol, which only provides 128 different values (0-127) for control. A value of 0 means there is no data (e.g. no velocity) and a value of 127 is the top of the range (e.g. the highest velocity). When recording automation for parameters such as pitch bend or velocity, the result looks like a series of steps, meaning the resolution isn't very smooth. The recently released MIDI 2.0 protocol increases the resolution to the point that it is very similar to the resolution of audio data. This may eliminate the need to convert MIDI to audio in certain situations and will open up new possibilities for MIDI processing.
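To make the resolution difference concrete, here is a small sketch that quantizes the same normalized controller sweep to MIDI 1.0's 7-bit range and to a 32-bit range (the size MIDI 2.0 uses for controller values). The exact bit depth varies by message type, so treat this purely as an illustration of why 7-bit automation looks like stairs.

```python
# Illustration: the same 0.0-1.0 controller sweep quantized at two resolutions.

def quantize(value: float, levels: int) -> int:
    """Map a normalized 0.0-1.0 value onto `levels` discrete steps."""
    return round(value * (levels - 1))

MIDI_1_LEVELS = 2 ** 7    # 128 values per controller in MIDI 1.0
MIDI_2_LEVELS = 2 ** 32   # 32-bit controller values in MIDI 2.0

for v in (0.50, 0.501, 0.502):
    print(v, quantize(v, MIDI_1_LEVELS), quantize(v, MIDI_2_LEVELS))

# With 7-bit resolution, 0.50, 0.501 and 0.502 all collapse onto step 64,
# while the 32-bit version keeps them distinct - hence the "stairs" in
# recorded automation versus a smooth ramp.
```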
♦ Editing MIDI data is a simple process that places no significant demand on the CPU or memory. ♦ However, it does require sufficient knowledge of the MIDI data involved. There is no point in changing MIDI data if we don't know how the change will take effect. ♦ That is why it is worth knowing at least the basic messages: ► Note On and Note Off, and understanding data bytes 1 and 2 (note number and velocity). ► Understanding Control Change messages, not only their structure but how the receiving devices implement them. ► Understanding the auxiliary safety and synchronization messages. ► Knowing that, in a DAW, the ONLY editor that shows us ALL of the MIDI data is the EVENT LIST. The other editors do NOT.
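As a companion to the point about knowing the basic messages, the sketch below decodes a few raw MIDI 1.0 bytes into Note On, Note Off, and Control Change rows, roughly the kind of information an event list displays. Running status and the auxiliary and sync messages mentioned above are left out for brevity.

```python
# Hedged sketch: decode simple 3-byte MIDI 1.0 messages (no running status,
# no System messages) into the kind of rows an event list shows.

MESSAGE_NAMES = {0x80: "Note Off", 0x90: "Note On", 0xB0: "Control Change"}

def decode(message: bytes) -> str:
    status, data1, data2 = message
    kind = MESSAGE_NAMES.get(status & 0xF0, "Other")
    channel = (status & 0x0F) + 1
    return f"{kind:14s} ch {channel:2d}  data1={data1:3d}  data2={data2:3d}"

stream = [
    bytes([0x90, 60, 100]),  # Note On, channel 1, note 60, velocity 100
    bytes([0xB0, 7, 90]),    # Control Change, channel 1, CC#7 (volume), value 90
    bytes([0x80, 60, 0]),    # Note Off, channel 1, note 60
]

for message in stream:
    print(decode(message))
```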
Processing audio data
Processing audio data involves applying effects or transformations to audio regions or clips using software plugins or hardware processors. There are various plugins and processors available, including effects, dynamics processors, EQs, and pitch and time processors. Common processing techniques for audio data include adding reverb, delay, chorus, or other effects to create ambience; applying compression, EQ, distortion, or other effects to alter tone and dynamics; using filters, envelopes, LFOs, or other modulators to shape the sound; using pitch shifters and time stretchers to change the pitch or tempo; and applying noise reduction, normalization, dithering, or other utilities for improved quality and compatibility.
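As one small, hedged example of the utilities mentioned above, the snippet below peak-normalizes a block of floating-point samples to a target level. Real tools add safeguards (true-peak detection, dithering on export) that are omitted here, and the sample values are made up for illustration.

```python
# Minimal sketch: peak normalization of floating-point audio samples.

def peak_normalize(samples: list[float], target_peak: float = 0.9) -> list[float]:
    """Scale the samples so the loudest one sits at `target_peak` (0.0-1.0)."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return samples[:]          # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

quiet_clip = [0.02, -0.05, 0.10, -0.07, 0.04]
print(peak_normalize(quiet_clip))  # loudest sample now sits at 0.9
```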
♦ Processing audio data generates a significant exchange of information. ♦ It occupies considerable memory and relies on DSP, which in turn loads the CPU. ♦ For those two reasons we need to keep a few practical factors in mind: ► available RAM ► hard-drive space ► processor power ► fast communication with the audio interface, preferably via ASIO. ♦ Some of that load will come from AUDIO PLUGINS, which inevitably consume hardware resources. ♦ All of these demands, combined with the sample rate and bit depth, have important consequences for LATENCY, something we have all suffered from at some point.
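To put the latency point in numbers, buffer latency is driven largely by the audio buffer size divided by the sample rate. The rough sketch below ignores converter and driver overhead, so real-world figures will be higher than these.

```python
# Rough sketch: one-way buffer latency in milliseconds for common settings.
# Real-world latency also includes ADC/DAC and driver overhead not shown here.

def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    return 1000.0 * buffer_samples / sample_rate

for sample_rate in (44_100, 48_000, 96_000):
    for buffer_samples in (64, 256, 1024):
        ms = buffer_latency_ms(buffer_samples, sample_rate)
        print(f"{sample_rate} Hz, {buffer_samples:4d}-sample buffer: {ms:5.2f} ms")
```

Smaller buffers mean less latency but more CPU strain, which is exactly the trade-off the contribution above describes.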
When editing audio, often it's helpful to add up to a 10 ms fade-in/out to make sure there aren't any clicks or pops where you cut the audio region.
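Here is a hedged sketch of that tip: a short linear fade-in and fade-out applied to the edges of a clip so the waveform starts and ends at zero, which is what prevents the click. DAWs usually offer curved fades as well; linear is used here to keep the example short.

```python
# Minimal sketch: ~10 ms linear fade-in/out on a list of float samples.

def apply_edge_fades(samples: list[float], sample_rate: int,
                     fade_ms: float = 10.0) -> list[float]:
    fade_len = min(int(sample_rate * fade_ms / 1000.0), len(samples) // 2)
    out = samples[:]
    for i in range(fade_len):
        gain = i / fade_len             # ramps 0.0 -> 1.0
        out[i] *= gain                  # fade-in at the start of the region
        out[-1 - i] *= gain             # fade-out at the end of the region
    return out

# At 44.1 kHz, a 10 ms fade covers 441 samples at each edge of the region.
```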
When I'm editing MIDI and audio, I've got a few go-to tricks. For MIDI, I start by quantizing, but not too perfectly – I like to keep some human feel. I'll often use velocity to add dynamics and make things sound more natural. With audio, I'm all about cleaning up before processing. I'll cut out any dead space, fix any obvious timing issues, and deal with any pops or clicks. For both, I'm a big fan of grouping similar tracks and processing them together. It helps keep everything cohesive. And I always make sure to save original versions before going wild with edits. That way, I can always go back if I need to. It's all about finding that sweet spot between polished and natural.
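One way to read the "quantize, but not too perfectly" idea is to move each note only part of the way toward the grid, which many DAWs expose as a quantize strength setting. The sketch below is a simplified take on that idea, with note times expressed in beats; the grid, strength, and jitter values are arbitrary examples, and real quantize features add swing, capture windows, and more.

```python
import random

# Simplified sketch of "strength" quantization: pull each note time only
# part of the way toward the nearest grid line, keeping some human feel.

def quantize_with_feel(note_times, grid=0.25, strength=0.8, jitter=0.005):
    """note_times are in beats; grid=0.25 is a 16th-note grid in 4/4."""
    quantized = []
    for t in note_times:
        nearest = round(t / grid) * grid          # hard-quantized position
        moved = t + strength * (nearest - t)      # move only 80% of the way
        quantized.append(moved + random.uniform(-jitter, jitter))
    return quantized

performance = [0.03, 0.27, 0.49, 0.77, 1.02]      # slightly loose 16ths
print(quantize_with_feel(performance))
```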