bob.spear

HomePage: https://pypi.python.org/pypi/bob.spear

Author: Elie Khoury

Download: https://pypi.python.org/packages/source/b/bob.spear/bob.spear-1.1.2.zip

BOB SPEAR: A Speaker Recognition Toolkit based on Bob
======================================================

SPEAR is a speaker recognition toolkit based on Bob, designed to run speaker verification/recognition
experiments. It was originally based on the FaceRecLib tool:
https://pypi.python.org/pypi/facereclib

`SPEAR`_ is designed so that it is easy to execute experiments combining different mixtures of:

* Speaker recognition databases and their corresponding protocols
* Voice activity detection
* Feature extraction
* Recognition/Verification tools

In any case, results of these experiments will be directly comparable when the same dataset is employed.

`SPEAR`_ is adapted to run speaker verification/recognition experiments with the SGE grid infrastructure at Idiap.


If you use this package and/or its results, please cite the following
publications:

1. The Spear paper published at ICASSP 2014::

    @inproceedings{spear,
      author = {Khoury, E. and El Shafey, L. and Marcel, S.},
      title = {Spear: An open source toolbox for speaker recognition based on {B}ob},
      booktitle = {IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP)},
      year = {2014},
      url = {http://publications.idiap.ch/downloads/papers/2014/Khoury_ICASSP_2014.pdf},
    }

2. Bob as the core framework used to run the experiments::

    @inproceedings{Anjos_ACMMM_2012,
      author = {A. Anjos and L. El Shafey and R. Wallace and M. G\"unther and C. McCool and S. Marcel},
      title = {Bob: a free signal processing and machine learning toolbox for researchers},
      year = {2012},
      month = oct,
      booktitle = {20th ACM Conference on Multimedia Systems (ACMMM), Nara, Japan},
      publisher = {ACM Press},
      url = {http://publications.idiap.ch/downloads/papers/2012/Anjos_Bob_ACMMM12.pdf},
    }


I- Installation
----------------

Just download this package and decompress it locally::

  $ wget https://pypi.python.org/packages/source/b/bob.spear/bob.spear-1.1.2.zip
  $ unzip bob.spear-1.1.2.zip
  $ cd bob.spear-1.1.2

`SPEAR`_ is based on the `BuildOut`_ Python build system. You only need to bootstrap and run buildout to get a working environment ready for
experiments::

  $ python bootstrap.py
  $ ./bin/buildout

This also requires that Bob (>= 1.2.0) is installed.
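
If you are unsure whether a suitable Bob build is visible to that interpreter, a quick
sanity check is sketched below (the ``bob.version`` attribute is assumed to be
available, as in the Bob 1.x series)::

  $ python -c "import bob; print(bob.version)"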


II- Running experiments
------------------------

The above two commands will automatically download all required packages (`gridtk`_, `xbob.sox`_ and `xbob.db.verification.filelist`_) from `pypi`_ and generate several scripts in the ``bin`` directory, including::
  
   $ bin/spkverif_gmm.py
   $ bin/spkverif_isv.py
   $ bin/spkverif_jfa.py
   $ bin/spkverif_ivector.py
   $ bin/para_ubm_spkverif_isv.py
   $ bin/para_ubm_spkverif_ivector.py
   $ bin/para_ubm_spkverif_gmm.py
   $ bin/fusion_llr.py
   $ bin/evaluate.py
   $ bin/det.py
   
The first four toolchains are the basic toolchains for GMM, ISV, JFA and I-Vector. The next three are the parallel implementations of GMM, ISV, and I-Vector.
 
To use the first seven (main) toolchains, you have to specify at least four command line parameters (see also the ``--help`` option); an example invocation is sketched after this list:

* ``--database``: The configuration file for the database
* ``--preprocessing``: The configuration file for Voice Activity Detection
* ``--feature-extraction``: The configuration file for feature extraction
* ``--tool-chain``: The configuration file for the speaker verification tool chain
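
A sequential run with only the required parameters might then look like the sketch
below; the configuration file names are placeholders, to be replaced by the
configuration files matching your database, VAD, feature extraction, and tool chain::

  # all configuration file paths below are illustrative placeholders
  $ bin/spkverif_gmm.py \
      --database my_database_config.py \
      --preprocessing my_vad_config.py \
      --feature-extraction my_features_config.py \
      --tool-chain my_toolchain_config.py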

If you are not at Idiap, please also specify the TEMP and USER directories (a combined example is sketched after the grid option below):

* ``--temp-directory``: This typically contains the features, the UBM model, the client models, etc.
* ``--user-directory``: This will contain the output scores (in text format)

If you want to run the experiments on the grid at Idiap, or on any equivalent SGE infrastructure, you can simply specify:

* ``--grid``: The configuration file for the grid setup.

If no grid configuration file is specified, the experiment is run sequentially on the local machine.
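
Putting these options together, a run with explicit output directories and a grid
submission might look like the sketch below; all configuration file names and
directory paths are placeholders::

  # configuration files and directories below are illustrative placeholders
  $ bin/spkverif_gmm.py \
      --database my_database_config.py \
      --preprocessing my_vad_config.py \
      --feature-extraction my_features_config.py \
      --tool-chain my_toolchain_config.py \
      --temp-directory /path/to/temp \
      --user-directory /path/to/results \
      --grid my_grid_config.py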

For several datasets, feature types, recognition algorithms, and grid requirements, `SPEAR`_ already provides configuration files.