A library for working with audio datasets
- Sources inherited from project devel:languages:python:numeric
- Devel package for openSUSE:Factory
- Links to openSUSE:Factory / python-audiomate
Checkout Package:

    osc -A https://api.opensuse.org checkout home:bnavigator:numpy/python-audiomate && cd $_
Source Files
Filename | Size | Changed
---|---|---
_link | 124 Bytes |
audiomate-3.0.0.tar.gz | 131 KB |
python-audiomate.changes | 3.71 KB |
python-audiomate.spec | 2.79 KB |
Revision 5 (latest revision is 16)
- Update to 3.0.0
  + Breaking Changes
    * Moved label encoding to its own module (audiomate.encoding). It now supports processing full corpora and storing the results in containers.
    * Moved audiomate.feeding.PartitioningFeatureIterator to the audiomate.feeding module.
    * Added audiomate.containers.AudioContainer to store audio tracks in a single file. All container classes now live in a separate module, audiomate.containers.
    * A Corpus now contains Tracks instead of Files. This makes it possible to use different kinds of audio sources. Audio from a file is now included via audiomate.tracks.FileTrack. New is audiomate.tracks.ContainerTrack, which reads data stored in a container.
    * audiomate.corpus.io.DefaultReader and audiomate.corpus.io.DefaultWriter now load and store tracks that are stored in a container.
    * All functionality regarding labels was moved to its own module, audiomate.annotations.
    * The class audiomate.tracks.Utterance was moved to the tracks module.
  + New Features
    * Introduced the audiomate.feeding module. It provides different tools for accessing container data. Via audiomate.feeding.Dataset, data can be accessed by indices. With audiomate.feeding.DataIterator, one can easily iterate over data, such as frames.
    * Added processing steps for computing onset strength (audiomate.processing.pipeline.OnsetStrength) and tempograms (audiomate.processing.pipeline.Tempogram).
    * Introduced the audiomate.corpus.validation module, which is used to validate a corpus.
    * Added a reader (audiomate.corpus.io.SWCReader) for the SWC corpus. It only works for the prepared corpus, however.
    * Added a function (audiomate.corpus.utils.label_cleaning.merge_consecutive_labels_with_same_values()) for merging consecutive labels with the same value.
    * Added a downloader (audiomate.corpus.io.GtzanDownloader) for the GTZAN Music/Speech dataset.
    * Added audiomate.corpus.assets.Label.tokenized() to get a list of tokens from a label. It basically splits the value and trims whitespace.
    * Added methods on audiomate.corpus.CorpusView, audiomate.corpus.assets.Utterance and audiomate.corpus.assets.LabelList to get a set of occurring tokens.
    * Added audiomate.encoding.TokenOrdinalEncoder to encode the labels of an utterance by mapping every token of the label to a number.
    * Created a container base class (audiomate.corpus.assets.Container) that can be used to store arbitrary data per utterance. audiomate.corpus.assets.FeatureContainer is now an extension of this container that provides functionality specifically for features.
    * Added functions to split utterances and label lists into multiple parts (audiomate.corpus.assets.Utterance.split(), audiomate.corpus.assets.LabelList.split()).
    * Added audiomate.processing.pipeline.AddContext to add context to frames, using previous and subsequent frames.
    * Added a reader (audiomate.corpus.io.MailabsReader) and a downloader (audiomate.corpus.io.MailabsDownloader) for the M-AILABS Speech Dataset.
  + Fixes
    * [#58] Keep track of the number of samples per frame and between frames. The correct values are now stored in a FeatureContainer, provided the processor implements it correctly.
    * [#72] Fixed a bug when reading samples from an utterance using a specific duration while the utterance end is not defined.
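The changelog describes Label.tokenized() as "splitting the value and trimming whitespace". A minimal standalone sketch of that behavior, using a hypothetical stand-in function rather than the real audiomate API:

```python
def tokenized(value, delimiter=" "):
    # Hypothetical stand-in for Label.tokenized(): split the label value
    # on the delimiter, trim whitespace, and drop empty tokens.
    # This is an illustration of the described behavior, not audiomate code.
    return [token.strip() for token in value.split(delimiter) if token.strip()]

print(tokenized("  hello   world "))  # ['hello', 'world']
```

The empty-token filter matters because consecutive delimiters (or leading/trailing ones) produce empty strings when splitting.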
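The merging of consecutive labels with the same value, as performed by merge_consecutive_labels_with_same_values(), can be sketched with simplified labels represented as (value, start, end) tuples; the function and data layout below are illustrative assumptions, not the actual audiomate implementation:

```python
def merge_consecutive_labels_with_same_values(labels):
    # Simplified illustration: merge adjacent labels in the list that share
    # the same value into one label spanning from the first label's start
    # to the last label's end. Labels are (value, start, end) tuples here.
    merged = []
    for value, start, end in labels:
        if merged and merged[-1][0] == value:
            prev_value, prev_start, _ = merged[-1]
            merged[-1] = (prev_value, prev_start, end)
        else:
            merged.append((value, start, end))
    return merged

labels = [("speech", 0.0, 1.0), ("speech", 1.0, 2.5), ("music", 2.5, 4.0)]
print(merge_consecutive_labels_with_same_values(labels))
# [('speech', 0.0, 2.5), ('music', 2.5, 4.0)]
```

This kind of cleanup is useful after automatic annotation, where one logical segment often arrives as many short back-to-back labels with identical values.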