maad.features.temporal_events
- maad.features.temporal_events(s, fs, dB_threshold=3, rejectDuration=None, mode='fast', Nt=512, display=False, **kwargs)
Compute the acoustic event index from an audio signal [1] [2].
An acoustic event corresponds to a period of the signal above a threshold. An event can be short (a single point if rejectDuration is None) or as long as the entire audio. Two acoustic events are separated by a period of low signal level (i.e. below the threshold).
Four values are computed with this function:
EVNtFraction : fraction of the total duration occupied by events
EVNmean : mean event duration (s)
EVNcount : number of events per second
EVN : binary vector or matrix with 1 at event positions
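As an illustration of how the three scalar indices relate to the binary event vector, the following sketch (a hypothetical helper, not the library's implementation) derives them from a binary vector and a known time step:

```python
import numpy as np

# Hypothetical illustration (not scikit-maad's code): given a binary event
# vector EVN sampled at dt seconds per point, derive the three scalar
# indices described above.
def event_indices(EVN, dt):
    EVN = np.asarray(EVN, dtype=bool)
    total_duration = EVN.size * dt                    # total signal duration (s)
    # Pad with zeros so diff() marks a rising edge (+1) at each event start
    # and a falling edge (-1) at each event end.
    edges = np.diff(np.concatenate(([0], EVN.astype(int), [0])))
    starts = np.flatnonzero(edges == 1)
    stops = np.flatnonzero(edges == -1)
    durations = (stops - starts) * dt                 # duration of each event (s)
    EVNtFraction = durations.sum() / total_duration   # fraction of time in events
    EVNmean = durations.mean() if durations.size else 0.0  # mean event duration (s)
    EVNcount = durations.size / total_duration        # events per second
    return EVNtFraction, EVNmean, EVNcount

# Example: 10 samples at 0.1 s each (1 s total) containing two events
frac, mean_dur, count = event_indices([0, 1, 1, 0, 0, 1, 1, 1, 0, 0], dt=0.1)
```

Here the two events last 0.2 s and 0.3 s, so the fraction is 0.5, the mean duration 0.25 s, and the rate 2 events per second.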
- Parameters:
- s : 1D array of floats
audio to process (wav)
- fs : integer
sampling frequency in Hz
- dB_threshold : scalar, optional, default is 3 dB
Samples of the envelope above this threshold are considered part of an acoustic event
- rejectDuration : scalar, optional, default is None
events shorter than rejectDuration (in s) are discarded
- mode : str, optional, default is “fast”
Select the mode to compute the envelope of the audio waveform
- “fast”
The sound is first divided into frames (2d) using the function _wave2timeframes(s); the maximum of each frame gives a good approximation of the envelope.
- “Hilbert”
Estimation of the envelope from the Hilbert transform. This method is slow
- Nt : integer, optional, default is 512
Size of each frame. The larger the frame, the coarser the approximation of the envelope.
- display : boolean, optional, default is False
Display the selected events on the audio waveform
- **kwargs, optional
Additional keyword arguments passed to plt.plot
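The two envelope modes described above can be sketched as follows. This is a hypothetical illustration under stated assumptions, not the library's exact code: “fast” takes the per-frame maximum of the absolute signal over frames of Nt samples, while “Hilbert” takes the magnitude of the analytic signal at every sample:

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical sketch of the two envelope modes (not scikit-maad's code).
# "fast": split the signal into frames of Nt samples and take the per-frame
# maximum of |s| (coarse but cheap). "Hilbert": magnitude of the analytic
# signal, one value per sample (finer resolution, slower).
def envelope(s, mode="fast", Nt=512):
    s = np.asarray(s, dtype=float)
    if mode == "fast":
        n_frames = len(s) // Nt                # drop the trailing partial frame
        frames = np.abs(s[:n_frames * Nt]).reshape(n_frames, Nt)
        return frames.max(axis=1)              # one value per frame
    elif mode == "Hilbert":
        return np.abs(hilbert(s))              # one value per sample
    raise ValueError("mode must be 'fast' or 'Hilbert'")

# A 1 kHz test tone sampled at 8 kHz: each 512-sample frame spans whole
# cycles, so the "fast" envelope should be ~1 everywhere.
t = np.arange(8192) / 8000
env_fast = envelope(np.sin(2 * np.pi * 1000 * t), mode="fast", Nt=512)
```

Note the trade-off governed by Nt: the “fast” envelope has len(s)//Nt points, so a larger Nt yields fewer, coarser envelope values.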
- Returns:
- EVNtFraction : scalar
fraction of the total duration occupied by events
- EVNmean : scalar
mean event duration in s
- EVNcount : scalar
number of events per second
- EVN : ndarray of floats
binary vector or matrix; 1 corresponds to an event, 0 to background
References
[1] Towsey, Michael (2013). Noise Removal from Waveforms and Spectrograms Derived from Natural Recordings of the Environment. Queensland University of Technology, Brisbane. https://eprints.qut.edu.au/61399/4/61399.pdf
[2] QUT: https://github.com/QutEcoacoustics/audio-analysis. Michael Towsey, Anthony Truskinger, Mark Cottman-Fields, & Paul Roe. (2018, March 5). Ecoacoustics Audio Analysis Software v18.03.0.41 (Version v18.03.0.41). Zenodo. http://doi.org/10.5281/zenodo.1188744
Examples
>>> import maad
>>> s, fs = maad.sound.load('../data/spinetail.wav')
>>> EVNtFract, EVNmean, EVNcount, _ = maad.features.temporal_events(s, fs, 6)
>>> print('EVNtFract: %2.2f / EVNmean: %2.2f / EVNcount: %2.0f' % (EVNtFract, EVNmean, EVNcount))
EVNtFract: 0.37 / EVNmean: 0.08 / EVNcount: 5