Join us today at 9:30 AM EDT to find out why MeteorJS chose Argon 2 on Ep. 75 of TWIM
Extremely bad audio for @storyteller, unfortunately. Seems like it lasts the whole hour.
Any possibility of post-production for audio repair/suppression of serious clipping?
Apologies about that. Not sure if it's my voice or the Internet. I'm moving to a new place with more stable Internet, so we'll see if that fixes it.
@alimgafar not sure if OneStream has the recording saved locally and if the audio is any better.
Unfortunately, this is one of the drawbacks with amateur livestreaming. We don’t have the discipline of performing sound checks before the session. And what we see and hear in the studio sometimes doesn’t match the stream.
Onestream does not make a local recording of our livestreams. The issues are with audio clipping and to a lesser extent, with network latency. There are settings for your microphone in Onestream that you can use to automatically equalize your audio.
So unfortunately, there isn’t any way to recover the faulty audio.
Generally speaking, testing your settings on the platform before livestreaming will help make your audio better.
We’ll have to make an effort in future episodes to perform sound checks before going live.
Thanks for sharing the issue.
As an audiophile ( sound engineer and mastering guy in a past life ) I feel you @alimgafar
Remedies, and why the urgency: bad audio leaves a physical memory that works against you in the future.
FWIW, for posterity, if a piece is valuable enough to the long term ( such as a discussion of bcrypt removal ), I find ffmpeg can separate out the audio from the video; then we can repair it ( at least to remove the jarring effect of the clipping, if not make it seamless, since it is an incomplete stream )… and after that we can merge the cleaned-up audio back into the audio-free video, with perfect alignment to mouths moving, etc.
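A minimal sketch of that extract → repair → re-merge round trip, assuming ffmpeg is on PATH; the filenames and container choices here are hypothetical examples, and the helpers only build the command lines:

```python
# Sketch of the ffmpeg round trip described above. These helpers only
# construct the argument lists; filenames are hypothetical.

def extract_audio_cmd(video_in: str, audio_out: str) -> list[str]:
    # -vn drops the video; -acodec copy pulls the audio stream out
    # untouched, so nothing is lost before the repair step.
    return ["ffmpeg", "-i", video_in, "-vn", "-acodec", "copy", audio_out]

def remerge_cmd(video_in: str, audio_in: str, video_out: str) -> list[str]:
    # Take the video from the original and the audio from the repaired
    # file. Stream-copying the video ( -c:v copy ) keeps frames
    # bit-identical, which is what preserves the lip sync.
    return ["ffmpeg", "-i", video_in, "-i", audio_in,
            "-map", "0:v", "-map", "1:a", "-c:v", "copy", video_out]

# To actually run them:
#   import subprocess
#   subprocess.run(extract_audio_cmd("ep75.mp4", "ep75_raw.mka"), check=True)
#   ... repair ep75_raw.mka in Audacity or with an ffmpeg filter ...
#   subprocess.run(remerge_cmd("ep75.mp4", "ep75_fixed.wav", "ep75_fixed.mp4"), check=True)
print(" ".join(extract_audio_cmd("ep75.mp4", "ep75_raw.mka")))
```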
Repairs look like running a noise suppressor or another processor over it ( whether in ffmpeg or an audio application like Audacity ) before re-merging the audio into the video stream, then re-submitting the video over the prior one. One particular recommended approach is the afftdn ( FFT denoiser ) filter in ffmpeg… that involves just running a sample of the worst section through the filter, then using that profile to filter the whole.
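As a sketch of the afftdn pass: the nr ( noise reduction, dB ) and nf ( noise floor, dB ) values below are illustrative starting points to be tuned by ear against the worst section, and the filenames are hypothetical:

```python
# Builds the ffmpeg afftdn command line. afftdn is ffmpeg's FFT-based
# denoiser; stronger nr strips more noise but risks artifacts, so test
# against the worst section of the recording first.

def denoise_cmd(audio_in: str, audio_out: str,
                nr_db: int = 12, nf_db: int = -40) -> list[str]:
    return ["ffmpeg", "-i", audio_in,
            "-af", f"afftdn=nr={nr_db}:nf={nf_db}", audio_out]

print(" ".join(denoise_cmd("ep75_raw.wav", "ep75_clean.wav")))
```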
Obviously fixing the root issue is priority but for pieces which were not saved locally in raw form, one can even clean up the published video. And it is easy to automate with a script if desired. Mostly used for removing background noise but can be used to isolate other types of noise…
Often just normalizing the audio helps. The visceral reaction to spikes can be very off-putting, like Pavlov's dog in reverse: a viewer may develop a physical aversion to future streams and need to be deconditioned, or else they will simply avoid them.
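For the idea in pure Python terms, here is peak normalization, the simplest version of what ffmpeg filters like loudnorm or dynaudnorm do more intelligently: scale every sample so the loudest peak lands at a target level, taming the jarring spikes without changing the mix.

```python
# Minimal peak-normalization sketch over raw float samples in [-1.0, 1.0].

def normalize(samples: list[float], target_peak: float = 0.9) -> list[float]:
    peak = max((abs(s) for s in samples), default=0.0)
    if peak == 0.0:
        return samples  # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

print(normalize([0.1, -0.45, 0.2]))
```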
I favor OSS tools ( like ffmpeg and Audacity ), so I would also recommend Shotcut for cleaning up wasted airtime, since 'snappy' sessions stay memory-resident better and keep the audience… But there are many more editing suites, and far better ones.
Lately, with transcription ( as with whisper derivatives, especially vax and ctranslate2 ), it is possible to feed the transcript into a script that gives clear cut points to make a session snappier, or to isolate trouble spots easily, versus needing to spend 3 minutes of post-production per 1 minute of footage, which is time/cost prohibitive.
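A sketch of that script idea, assuming whisper-style segments with start/end timestamps and text; the segment data below is invented for illustration, and real transcripts would come out of a whisper run:

```python
# Given whisper-style segments, find silent gaps longer than a threshold:
# candidate cut points for tightening a session.

def cut_points(segments: list[dict], min_gap: float = 2.0) -> list[tuple[float, float]]:
    gaps = []
    for prev, cur in zip(segments, segments[1:]):
        if cur["start"] - prev["end"] >= min_gap:
            gaps.append((prev["end"], cur["start"]))
    return gaps

segments = [
    {"start": 0.0, "end": 4.2, "text": "Welcome to the show."},
    {"start": 9.8, "end": 14.0, "text": "Let's talk about Argon2."},
    {"start": 14.5, "end": 20.0, "text": "First, some history."},
]
print(cut_points(segments))  # only the gap between 4.2 and 9.8 qualifies
```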
I put this out there since post-production seems to be a key way to make the TWIM sessions 'pop' more and grow the audience. A lot of work goes into them… and getting people in the same 'room' can likely never be done over, so I find a bag of tricks for recovering media segments is key to moving forward without holding anyone back… and without needing a dedicated audio engineer.
Hope you get to stable internet access @storyteller
My move to a new place with more stable and faster Internet is done.