DeepSpeech 0.5.0-alpha.11 with TensorFlow v1.13.1

Step 1 – Preparation

My setup:

A physical machine to run the project.

VM configuration:

An Ubuntu 18.04 VM.

I installed Ubuntu 18.04 minimal (only OpenSSH was selected during installation).

Here is the link to download Ubuntu: http://mirror.it.ubc.ca/ubuntu-releases/18.04.2/ubuntu-18.04.2-live-server-amd64.iso

After the installation, update the system:

sudo apt-get update && sudo apt-get upgrade -y

Step 2 – Installing the dependencies

To clone the DeepSpeech files, you first need git-lfs:

Step 2.1 – Installing git-lfs

Go to this link: https://github.com/git-lfs/git-lfs/releases/tag/v2.7.2

It is best to use the latest version:

# Download the .deb package under its real name, then install it (lowercase -i)
wget -O git-lfs_2.7.2_amd64.deb https://packagecloud.io/github/git-lfs/packages/debian/stretch/git-lfs_2.7.2_amd64.deb/download
sudo dpkg -i git-lfs_2.7.2_amd64.deb
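
After installing the package, git-lfs still has to register its hooks with git. A quick one-time setup and sanity check (not in the original steps, but standard for git-lfs):

# Register the Git LFS filters/hooks for the current user (one-time)
git lfs install
# Confirm the client reports the expected version
git lfs version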

Étape 2.2 – Autres dépendances

sudo apt-get install -y build-essential libboost-all-dev cmake zlib1g-dev libbz2-dev liblzma-dev libsox-dev

Step 3 – Installing DeepSpeech

Here is the link to the DeepSpeech project: https://github.com/mozilla/DeepSpeech

On the Linux machine:

mkdir project
cd project
git clone https://github.com/mozilla/DeepSpeech.git
cd DeepSpeech
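
If you want to confirm that the LFS-tracked files actually came down with the clone, git-lfs can list them (an optional check, assuming the clone completed):

# List files managed by Git LFS in this repository; an empty list right
# after a fresh clone would suggest git-lfs was not active during the clone
git lfs ls-files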

You should now have all the files needed to continue.

My setup does not support AVX, so the stock TensorFlow build will not work for me (see Error #1 below).
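
To check ahead of time whether your CPU exposes AVX, /proc/cpuinfo lists the feature flags (a quick check, not part of the original steps):

# Prints the AVX-related flags if present; no output means no AVX
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u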

Start by installing pip3 and upgrading it:

sudo apt install -y python3-pip
pip3 install --upgrade pip==9.0.3

From the DeepSpeech root directory:

sudo pip3 install -r requirements.txt

Next, install the decoder:

sudo pip3 install $(python3 util/taskcluster.py --decoder)
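
To verify the decoder installed correctly, you can try the same import that DeepSpeech.py performs (it is the import shown in the Error #2 traceback below):

# Exits silently on success; a ModuleNotFoundError means the decoder is missing
python3 -c "from ds_ctcdecoder import ctc_beam_search_decoder, Scorer"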

Next, TensorFlow.

Here is how I managed to get it working on my VM. If this build does not work for your setup, several community-built wheels are available here: https://github.com/yaroslavvb/tensorflow-community-wheels/issues

pip3 uninstall tensorflow
pip3 install https://github.com/Tzeny/tensorflowbuilds/raw/master/tensorflow-1.13.1-cp36-cp36m-linux_x86_64.whl
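
A quick way to confirm the wheel loads on your CPU (on a machine without AVX, a mismatched build aborts right at import, as in Error #1 below):

# Should print 1.13.1; an immediate abort here means the build still needs AVX
python3 -c "import tensorflow as tf; print(tf.__version__)"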

Step 4 – Test

Now you can test your setup from the DeepSpeech root directory.

Note: make sure your Python version is 3.6. If the python command points to a different interpreter, create a symlink:

sudo ln -sf /usr/bin/python3.6 /usr/bin/python
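
To confirm which interpreter the scripts will now pick up (a quick check):

# Both should report Python 3.6.x after the symlink
python --version
python3 --version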

In my case, the first time I launched the command, my numpy version was not the right one, so I had to upgrade it first:

sudo pip install -U numpy

Here is the command and its output:
tgingras@trainer01:~/project/DeepSpeech$ ./bin/run-ldc93s1.sh 
+ [ ! -f DeepSpeech.py ]
+ [ ! -f data/ldc93s1/ldc93s1.csv ]
+ [ -d  ]
+ python -c from xdg import BaseDirectory as xdg; print(xdg.save_data_path("deepspeech/ldc93s1"))
+ checkpoint_dir=/home/tgingras/.local/share/deepspeech/ldc93s1
+ export CUDA_VISIBLE_DEVICES=0
+ python -u DeepSpeech.py --noshow_progressbar --train_files data/ldc93s1/ldc93s1.csv --test_files data/ldc93s1/ldc93s1.csv --train_batch_size 1 --test_batch_size 1 --n_hidden 100 --epochs 200 --checkpoint_dir /home/tgingras/.local/share/deepspeech/ldc93s1
WARNING:tensorflow:From /home/tgingras/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py:429: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
    tf.py_function, which takes a python function which manipulates tf eager
    tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
    an ndarray (just call tensor.numpy()) but having access to eager tensors
    means `tf.py_function`s can use accelerators such as GPUs as well as
    being differentiable using a gradient tape.
    
WARNING:tensorflow:From /home/tgingras/.local/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py:358: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/tgingras/.local/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/lstm_ops.py:696: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /home/tgingras/.local/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
I Restored variables from most recent checkpoint at /home/tgingras/.local/share/deepspeech/ldc93s1/train-400, step 400
I STARTING Optimization
I Training epoch 0...
I Finished training epoch 0 - loss: 0.983112
I Training epoch 1...
I Finished training epoch 1 - loss: 1.564317
I Training epoch 2...
I Finished training epoch 2 - loss: 0.557225
I Training epoch 3...
I Finished training epoch 3 - loss: 0.443870
I Training epoch 4...
I Finished training epoch 4 - loss: 1.085620
I Training epoch 5...
I Finished training epoch 5 - loss: 0.806546
WARNING:tensorflow:From /home/tgingras/.local/lib/python3.6/site-packages/tensorflow/python/training/saver.py:966: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
I Training epoch 6...
I Finished training epoch 6 - loss: 0.768998
I Training epoch 7...
I Finished training epoch 7 - loss: 1.181353
I Training epoch 8...
I Finished training epoch 8 - loss: 0.472965
I Training epoch 9...
I Finished training epoch 9 - loss: 2.304265
I Training epoch 10...
I Finished training epoch 10 - loss: 0.724547
I Training epoch 11...
I Finished training epoch 11 - loss: 0.564534
I Training epoch 12...
I Finished training epoch 12 - loss: 1.010434
I Training epoch 13...
I Finished training epoch 13 - loss: 0.842359
I Training epoch 14...
I Finished training epoch 14 - loss: 0.604926
I Training epoch 15...
I Finished training epoch 15 - loss: 0.670776
I Training epoch 16...
I Finished training epoch 16 - loss: 0.793744
I Training epoch 17...
I Finished training epoch 17 - loss: 0.432419
I Training epoch 18...
I Finished training epoch 18 - loss: 0.624075
I Training epoch 19...
I Finished training epoch 19 - loss: 1.482327
I Training epoch 20...
I Finished training epoch 20 - loss: 0.546156
I Training epoch 21...
I Finished training epoch 21 - loss: 0.759855
I Training epoch 22...
I Finished training epoch 22 - loss: 0.691648
I Training epoch 23...
I Finished training epoch 23 - loss: 0.711924
I Training epoch 24...
I Finished training epoch 24 - loss: 0.359982
I Training epoch 25...
I Finished training epoch 25 - loss: 0.645118
I Training epoch 26...
I Finished training epoch 26 - loss: 0.482775
I Training epoch 27...
I Finished training epoch 27 - loss: 0.408742
I Training epoch 28...
I Finished training epoch 28 - loss: 0.476645
I Training epoch 29...
I Finished training epoch 29 - loss: 0.544277
I Training epoch 30...
I Finished training epoch 30 - loss: 1.156650
I Training epoch 31...
I Finished training epoch 31 - loss: 0.428750
I Training epoch 32...
I Finished training epoch 32 - loss: 0.539975
I Training epoch 33...
I Finished training epoch 33 - loss: 0.622519
I Training epoch 34...
I Finished training epoch 34 - loss: 0.423732
I Training epoch 35...
I Finished training epoch 35 - loss: 0.486777
I Training epoch 36...
I Finished training epoch 36 - loss: 0.409215
I Training epoch 37...
I Finished training epoch 37 - loss: 0.714880
I Training epoch 38...
I Finished training epoch 38 - loss: 0.962144
I Training epoch 39...
I Finished training epoch 39 - loss: 0.556101
I Training epoch 40...
I Finished training epoch 40 - loss: 0.322158
I Training epoch 41...
I Finished training epoch 41 - loss: 0.613771
I Training epoch 42...
I Finished training epoch 42 - loss: 0.596573
I Training epoch 43...
I Finished training epoch 43 - loss: 1.004774
I Training epoch 44...
I Finished training epoch 44 - loss: 0.300449
I Training epoch 45...
I Finished training epoch 45 - loss: 0.378002
I Training epoch 46...
I Finished training epoch 46 - loss: 0.624511
I Training epoch 47...
I Finished training epoch 47 - loss: 0.676663
I Training epoch 48...
I Finished training epoch 48 - loss: 0.351976
I Training epoch 49...
I Finished training epoch 49 - loss: 0.335789
I Training epoch 50...
I Finished training epoch 50 - loss: 0.386625
I Training epoch 51...
I Finished training epoch 51 - loss: 0.380892
I Training epoch 52...
I Finished training epoch 52 - loss: 0.624659
I Training epoch 53...
I Finished training epoch 53 - loss: 0.585071
I Training epoch 54...
I Finished training epoch 54 - loss: 0.510913
I Training epoch 55...
I Finished training epoch 55 - loss: 0.285631
I Training epoch 56...
I Finished training epoch 56 - loss: 0.318445
I Training epoch 57...
I Finished training epoch 57 - loss: 0.373123
I Training epoch 58...
I Finished training epoch 58 - loss: 0.956714
I Training epoch 59...
I Finished training epoch 59 - loss: 0.342052
I Training epoch 60...
I Finished training epoch 60 - loss: 0.388205
I Training epoch 61...
I Finished training epoch 61 - loss: 0.330497
I Training epoch 62...
I Finished training epoch 62 - loss: 0.222877
I Training epoch 63...
I Finished training epoch 63 - loss: 0.294510
I Training epoch 64...
I Finished training epoch 64 - loss: 0.383408
I Training epoch 65...
I Finished training epoch 65 - loss: 0.362585
I Training epoch 66...
I Finished training epoch 66 - loss: 0.350256
I Training epoch 67...
I Finished training epoch 67 - loss: 0.471003
I Training epoch 68...
I Finished training epoch 68 - loss: 0.322495
I Training epoch 69...
I Finished training epoch 69 - loss: 0.811096
I Training epoch 70...
I Finished training epoch 70 - loss: 0.391573
I Training epoch 71...
I Finished training epoch 71 - loss: 0.362640
I Training epoch 72...
I Finished training epoch 72 - loss: 0.269656
I Training epoch 73...
I Finished training epoch 73 - loss: 0.442807
I Training epoch 74...
I Finished training epoch 74 - loss: 0.320067
I Training epoch 75...
I Finished training epoch 75 - loss: 0.284552
I Training epoch 76...
I Finished training epoch 76 - loss: 0.319485
I Training epoch 77...
I Finished training epoch 77 - loss: 0.286944
I Training epoch 78...
I Finished training epoch 78 - loss: 0.554994
I Training epoch 79...
I Finished training epoch 79 - loss: 0.212485
I Training epoch 80...
I Finished training epoch 80 - loss: 0.240692
I Training epoch 81...
I Finished training epoch 81 - loss: 0.295989
I Training epoch 82...
I Finished training epoch 82 - loss: 0.392743
I Training epoch 83...
I Finished training epoch 83 - loss: 0.319089
I Training epoch 84...
I Finished training epoch 84 - loss: 0.322825
I Training epoch 85...
I Finished training epoch 85 - loss: 0.203434
I Training epoch 86...
I Finished training epoch 86 - loss: 0.332978
I Training epoch 87...
I Finished training epoch 87 - loss: 0.511959
I Training epoch 88...
I Finished training epoch 88 - loss: 0.328205
I Training epoch 89...
I Finished training epoch 89 - loss: 0.259763
I Training epoch 90...
I Finished training epoch 90 - loss: 0.411349
I Training epoch 91...
I Finished training epoch 91 - loss: 0.255462
I Training epoch 92...
I Finished training epoch 92 - loss: 0.280341
I Training epoch 93...
I Finished training epoch 93 - loss: 0.203527
I Training epoch 94...
I Finished training epoch 94 - loss: 0.201110
I Training epoch 95...
I Finished training epoch 95 - loss: 0.223869
I Training epoch 96...
I Finished training epoch 96 - loss: 0.238241
I Training epoch 97...
I Finished training epoch 97 - loss: 0.267583
I Training epoch 98...
I Finished training epoch 98 - loss: 0.182873
I Training epoch 99...
I Finished training epoch 99 - loss: 0.295621
I Training epoch 100...
I Finished training epoch 100 - loss: 0.190272
I Training epoch 101...
I Finished training epoch 101 - loss: 0.244154
I Training epoch 102...
I Finished training epoch 102 - loss: 0.232303
I Training epoch 103...
I Finished training epoch 103 - loss: 0.249804
I Training epoch 104...
I Finished training epoch 104 - loss: 0.141074
I Training epoch 105...
I Finished training epoch 105 - loss: 0.397383
I Training epoch 106...
I Finished training epoch 106 - loss: 0.296323
I Training epoch 107...
I Finished training epoch 107 - loss: 0.200690
I Training epoch 108...
I Finished training epoch 108 - loss: 0.245546
I Training epoch 109...
I Finished training epoch 109 - loss: 0.158953
I Training epoch 110...
I Finished training epoch 110 - loss: 0.270074
I Training epoch 111...
I Finished training epoch 111 - loss: 0.220921
I Training epoch 112...
I Finished training epoch 112 - loss: 0.183297
I Training epoch 113...
I Finished training epoch 113 - loss: 0.164594
I Training epoch 114...
I Finished training epoch 114 - loss: 0.210777
I Training epoch 115...
I Finished training epoch 115 - loss: 0.214979
I Training epoch 116...
I Finished training epoch 116 - loss: 0.206931
I Training epoch 117...
I Finished training epoch 117 - loss: 0.278687
I Training epoch 118...
I Finished training epoch 118 - loss: 0.225743
I Training epoch 119...
I Finished training epoch 119 - loss: 0.183664
I Training epoch 120...
I Finished training epoch 120 - loss: 0.279458
I Training epoch 121...
I Finished training epoch 121 - loss: 0.271171
I Training epoch 122...
I Finished training epoch 122 - loss: 0.235194
I Training epoch 123...
I Finished training epoch 123 - loss: 0.203871
I Training epoch 124...
I Finished training epoch 124 - loss: 0.201225
I Training epoch 125...
I Finished training epoch 125 - loss: 0.213906
I Training epoch 126...
I Finished training epoch 126 - loss: 0.249486
I Training epoch 127...
I Finished training epoch 127 - loss: 0.207683
I Training epoch 128...
I Finished training epoch 128 - loss: 0.144219
I Training epoch 129...
I Finished training epoch 129 - loss: 0.148312
I Training epoch 130...
I Finished training epoch 130 - loss: 0.137584
I Training epoch 131...
I Finished training epoch 131 - loss: 0.160867
I Training epoch 132...
I Finished training epoch 132 - loss: 0.189410
I Training epoch 133...
I Finished training epoch 133 - loss: 0.163287
I Training epoch 134...
I Finished training epoch 134 - loss: 0.208483
I Training epoch 135...
I Finished training epoch 135 - loss: 0.227397
I Training epoch 136...
I Finished training epoch 136 - loss: 0.194304
I Training epoch 137...
I Finished training epoch 137 - loss: 0.181752
I Training epoch 138...
I Finished training epoch 138 - loss: 0.149118
I Training epoch 139...
I Finished training epoch 139 - loss: 0.177778
I Training epoch 140...
I Finished training epoch 140 - loss: 0.278596
I Training epoch 141...
I Finished training epoch 141 - loss: 0.236488
I Training epoch 142...
I Finished training epoch 142 - loss: 0.218113
I Training epoch 143...
I Finished training epoch 143 - loss: 0.183760
I Training epoch 144...
I Finished training epoch 144 - loss: 0.154252
I Training epoch 145...
I Finished training epoch 145 - loss: 0.133591
I Training epoch 146...
I Finished training epoch 146 - loss: 0.317931
I Training epoch 147...
I Finished training epoch 147 - loss: 0.221187
I Training epoch 148...
I Finished training epoch 148 - loss: 0.213546
I Training epoch 149...
I Finished training epoch 149 - loss: 0.125132
I Training epoch 150...
I Finished training epoch 150 - loss: 0.172604
I Training epoch 151...
I Finished training epoch 151 - loss: 0.221345
I Training epoch 152...
I Finished training epoch 152 - loss: 0.207469
I Training epoch 153...
I Finished training epoch 153 - loss: 0.319682
I Training epoch 154...
I Finished training epoch 154 - loss: 0.149294
I Training epoch 155...
I Finished training epoch 155 - loss: 0.172080
I Training epoch 156...
I Finished training epoch 156 - loss: 0.141249
I Training epoch 157...
I Finished training epoch 157 - loss: 0.177164
I Training epoch 158...
I Finished training epoch 158 - loss: 0.231256
I Training epoch 159...
I Finished training epoch 159 - loss: 0.207067
I Training epoch 160...
I Finished training epoch 160 - loss: 0.189554
I Training epoch 161...
I Finished training epoch 161 - loss: 0.281833
I Training epoch 162...
I Finished training epoch 162 - loss: 0.159726
I Training epoch 163...
I Finished training epoch 163 - loss: 0.241460
I Training epoch 164...
I Finished training epoch 164 - loss: 0.125390
I Training epoch 165...
I Finished training epoch 165 - loss: 0.187113
I Training epoch 166...
I Finished training epoch 166 - loss: 0.162001
I Training epoch 167...
I Finished training epoch 167 - loss: 0.213872
I Training epoch 168...
I Finished training epoch 168 - loss: 0.123111
I Training epoch 169...
I Finished training epoch 169 - loss: 0.221557
I Training epoch 170...
I Finished training epoch 170 - loss: 0.161502
I Training epoch 171...
I Finished training epoch 171 - loss: 0.136605
I Training epoch 172...
I Finished training epoch 172 - loss: 0.199177
I Training epoch 173...
I Finished training epoch 173 - loss: 0.187124
I Training epoch 174...
I Finished training epoch 174 - loss: 0.183505
I Training epoch 175...
I Finished training epoch 175 - loss: 0.252856
I Training epoch 176...
I Finished training epoch 176 - loss: 0.148036
I Training epoch 177...
I Finished training epoch 177 - loss: 0.160260
I Training epoch 178...
I Finished training epoch 178 - loss: 0.138608
I Training epoch 179...
I Finished training epoch 179 - loss: 0.267728
I Training epoch 180...
I Finished training epoch 180 - loss: 0.127815
I Training epoch 181...
I Finished training epoch 181 - loss: 0.107392
I Training epoch 182...
I Finished training epoch 182 - loss: 0.098068
I Training epoch 183...
I Finished training epoch 183 - loss: 0.177044
I Training epoch 184...
I Finished training epoch 184 - loss: 0.132390
I Training epoch 185...
I Finished training epoch 185 - loss: 0.188128
I Training epoch 186...
I Finished training epoch 186 - loss: 0.122193
I Training epoch 187...
I Finished training epoch 187 - loss: 0.147202
I Training epoch 188...
I Finished training epoch 188 - loss: 0.113221
I Training epoch 189...
I Finished training epoch 189 - loss: 0.161354
I Training epoch 190...
I Finished training epoch 190 - loss: 0.171314
I Training epoch 191...
I Finished training epoch 191 - loss: 0.142675
I Training epoch 192...
I Finished training epoch 192 - loss: 0.838540
I Training epoch 193...
I Finished training epoch 193 - loss: 0.163848
I Training epoch 194...
I Finished training epoch 194 - loss: 0.165025
I Training epoch 195...
I Finished training epoch 195 - loss: 0.243673
I Training epoch 196...
I Finished training epoch 196 - loss: 0.620380
I Training epoch 197...
I Finished training epoch 197 - loss: 0.121340
I Training epoch 198...
I Finished training epoch 198 - loss: 0.279436
I Training epoch 199...
I Finished training epoch 199 - loss: 0.242714
I FINISHED optimization in 0:00:42.099750
I Restored variables from most recent checkpoint at /home/tgingras/.local/share/deepspeech/ldc93s1/train-600, step 600
Testing model on data/ldc93s1/ldc93s1.csv
I Test epoch...
Test on data/ldc93s1/ldc93s1.csv - WER: 0.000000, CER: 0.000000, loss: 0.075669
--------------------------------------------------------------------------------
WER: 0.000000, CER: 0.000000, loss: 0.075669
 - src: "she had your dark suit in greasy wash water all year"
 - res: "she had your dark suit in greasy wash water all year"
--------------------------------------------------------------------------------

Step 5 – Errors encountered

Error #1

[root@johntrainer DeepSpeech]# ./DeepSpeech.py
2019-06-03 16:16:38.858342: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.
Aborted

Solution

Install a TensorFlow 1.13.1 build compiled without AVX.
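
These are the same commands as in Step 3, repeated here for convenience:

pip3 uninstall tensorflow
pip3 install https://github.com/Tzeny/tensorflowbuilds/raw/master/tensorflow-1.13.1-cp36-cp36m-linux_x86_64.whl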

Error #2

[root@johntrainer DeepSpeech]# ./DeepSpeech.py
Traceback (most recent call last):
  File "./DeepSpeech.py", line 18, in <module>
    from ds_ctcdecoder import ctc_beam_search_decoder, Scorer
ModuleNotFoundError: No module named 'ds_ctcdecoder'

Solution

Install the decoder as shown above.
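
From the DeepSpeech root, as in Step 3:

sudo pip3 install $(python3 util/taskcluster.py --decoder)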

Error #3

[root@johntrainer DeepSpeech]# ./bin/run-ldc93s1.sh
+ '[' '!' -f DeepSpeech.py ']'
+ '[' '!' -f data/ldc93s1/ldc93s1.csv ']'
+ echo 'Downloading and preprocessing LDC93S1 example data, saving in ./data/ldc93s1.'
Downloading and preprocessing LDC93S1 example data, saving in ./data/ldc93s1.
+ python -u bin/import_ldc93s1.py ./data/ldc93s1
No path "./data/ldc93s1" - creating ...
No archive "./data/ldc93s1/LDC93S1.wav" - downloading...
Progress |                                                                                                                                                                                                                   | N/A% completedNo archive "./data/ldc93s1/LDC93S1.txt" - downloading...
Progress |###################################################################################################################################################################################################################| 100% completed
Progress |###################################################################################################################################################################################################################| 100% completed
+ '[' -d '' ']'
++ python -c 'from xdg import BaseDirectory as xdg; print(xdg.save_data_path("deepspeech/ldc93s1"))'
+ checkpoint_dir=/root/.local/share/deepspeech/ldc93s1
+ export CUDA_VISIBLE_DEVICES=0
+ CUDA_VISIBLE_DEVICES=0
+ python -u DeepSpeech.py --noshow_progressbar --train_files data/ldc93s1/ldc93s1.csv --test_files data/ldc93s1/ldc93s1.csv --train_batch_size 1 --test_batch_size 1 --n_hidden 100 --epochs 200 --checkpoint_dir /root/.local/share/deepspeech/ldc93s1
Traceback (most recent call last):
  File "DeepSpeech.py", line 829, in <module>
    tf.app.run(main)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 124, in run
    _sys.exit(main(argv))
  File "DeepSpeech.py", line 813, in main
    train()
  File "DeepSpeech.py", line 370, in train
    cache_path=FLAGS.feature_cache)
  File "/root/project/DeepSpeech/util/feeding.py", line 96, in create_dataset
    .map(entry_to_features, num_parallel_calls=tf.data.experimental.AUTOTUNE)
AttributeError: module 'tensorflow.python.data' has no attribute 'experimental'
[root@johntrainer DeepSpeech]#

Solution

This error occurred because I tried TensorFlow version 1.5; version 1.13.1 is required.
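
To check which TensorFlow version is currently installed before rerunning the script (a quick check):

# Shows the installed package version; it should read 1.13.1
pip3 show tensorflow | grep Version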

Error #4

If you see this error when running ./bin/run-ldc93s1.sh:

tgingras@trainer02:~/project/DeepSpeech$ ./bin/run-ldc93s1.sh 
+ [ ! -f DeepSpeech.py ]
+ [ ! -f data/ldc93s1/ldc93s1.csv ]
+ [ -d  ]
+ python -c from xdg import BaseDirectory as xdg; print(xdg.save_data_path("deepspeech/ldc93s1"))
+ checkpoint_dir=/home/tgingras/.local/share/deepspeech/ldc93s1
+ export CUDA_VISIBLE_DEVICES=0
+ python -u DeepSpeech.py --noshow_progressbar --train_files data/ldc93s1/ldc93s1.csv --test_files data/ldc93s1/ldc93s1.csv --train_batch_size 1 --test_batch_size 1 --n_hidden 100 --epochs 200 --checkpoint_dir /home/tgingras/.local/share/deepspeech/ldc93s1
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ImportError: numpy.core.multiarray failed to import

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 968, in _find_and_load
SystemError: <class '_frozen_importlib._ModuleLockManager'> returned a result with an error set
ImportError: numpy.core._multiarray_umath failed to import
ImportError: numpy.core.umath failed to import
2019-06-06 21:34:30.030276: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr 
Aborted (core dumped)

Solution

Install numpy 1.16.4:

sudo pip install -U numpy
### OUTPUT ###
...
Collecting numpy
  Downloading 
....
Installing collected packages: numpy
  Found existing installation: numpy 1.15.4
    Uninstalling numpy-1.15.4:
      Successfully uninstalled numpy-1.15.4
Successfully installed numpy-1.16.4

Videos

How to deploy DeepSpeech

Next Steps

  • Prepare recordings to launch a real training job.
  • Build a platform to streamline the process of training on new inputs and obtaining the final model file.
  • Try to set up a cluster, with or without Docker.
  • Install and configure the tools needed to create the training file.
