Full metadata record
DC field | Value | Language |
---|---|---|
dc.contributor.author | Matoušek, Jindřich | |
dc.contributor.author | Hanzlíček, Zdeněk | |
dc.contributor.author | Tihelka, Daniel | |
dc.contributor.author | Méner, Martin | |
dc.date.accessioned | 2015-12-16T07:25:21Z | - |
dc.date.available | 2015-12-16T07:25:21Z | - |
dc.date.issued | 2010 | |
dc.identifier.citation | MATOUŠEK, Jindřich; HANZLÍČEK, Zdeněk; TIHELKA, Daniel; MÉNER, Martin. Automatic dubbing of TV programmes for the hearing impaired. In: Proceedings of the 10th international conference on signal processing, ICSP '10, 24.10.2010 - 28.10.2010. Beijing: IEEE Press, 2010, p. 589-592. ISBN 978-1-4244-5898-1. | en |
dc.identifier.isbn | 978-1-4244-5898-1 | |
dc.identifier.uri | http://www.kky.zcu.cz/cs/publications/MatousekJ_2010_AutomaticDubbingof | |
dc.identifier.uri | http://hdl.handle.net/11025/17016 | |
dc.format | 4 s. | cs |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | en |
dc.publisher | IEEE Press | en |
dc.rights | © Jindřich Matoušek - Zdeněk Hanzlíček - Daniel Tihelka - Martin Méner | cs |
dc.subject | syntéza řeči | cs |
dc.subject | WSOLA | cs |
dc.subject | automatický dabing | cs |
dc.subject | sluchová postižení | cs |
dc.title | Automatic dubbing of TV programmes for the hearing impaired | en |
dc.title.alternative | Automatický dabing televizních pořadů pro sluchově postižené | cs |
dc.type | článek | cs |
dc.type | article | en |
dc.rights.access | openAccess | en |
dc.type.version | publishedVersion | en |
dc.description.abstract-translated | This paper presents experiments with a customisation of a corpus-based unit-selection text-to-speech (TTS) system for automatic dubbing of TV programmes. The project aims at people with hearing impairments, as its main goal is to produce a highly intelligible, less-dynamic, and more-undisturbed audio track for TV programmes automatically from subtitles. A two-phase synchronisation process was proposed to cope with audio-video synchronisation issues. These phases include both off-line time compression of all utterances in a source speech corpus used for TTS and on-line time compression of speech that overlaps assigned subtitle time slots. Based on a case study, in which a TTS-generated audio track of a selected movie was analysed, a simplification of to-be-desynchronised subtitle texts was proposed in order to keep time-compression factors within a reasonable range. In this way, abrupt changes in the dynamics of the produced audio track are avoided. | en |
dc.subject.translated | speech synthesis | en |
dc.subject.translated | WSOLA | en |
dc.subject.translated | automatic dubbing | en |
dc.subject.translated | hearing impaired | en |
dc.type.status | Peer-reviewed | en |
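The abstract above describes keeping time-compression factors within a reasonable range by simplifying subtitle texts whose synthesized speech overruns its time slot. A minimal sketch of that idea follows; the function names, the 1.5 threshold, and the example durations are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: per-subtitle time-compression factors, flagging candidates
# for text simplification. All names and the threshold are assumptions.

def compression_factor(speech_dur_s: float, slot_dur_s: float) -> float:
    """Factor by which synthesized speech must be sped up to fit its slot.

    1.0 means no compression is needed; values above 1.0 mean the
    synthesized speech overlaps the assigned subtitle time slot.
    """
    if slot_dur_s <= 0:
        raise ValueError("subtitle slot must have a positive duration")
    return max(1.0, speech_dur_s / slot_dur_s)


def flag_for_simplification(subtitles, max_factor=1.5):
    """Return indices of subtitles whose speech would need compression
    beyond max_factor, i.e. candidates for text simplification."""
    return [
        i
        for i, (speech_dur, slot_dur) in enumerate(subtitles)
        if compression_factor(speech_dur, slot_dur) > max_factor
    ]


# Example: (synthesized duration, subtitle slot duration) in seconds.
subs = [(2.0, 2.5), (4.2, 3.0), (6.0, 3.0)]
print(flag_for_simplification(subs))  # → [2]
```

Only the last subtitle (factor 2.0) exceeds the assumed threshold, so its text would be simplified and re-synthesized rather than heavily time-compressed, avoiding abrupt changes in the dynamics of the audio track.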
Appears in Collections: | Články / Articles (NTIS) Články / Articles (KIV) |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
MatousekJ_2010_AutomaticDubbingof.pdf | Full text | 243.95 kB | Adobe PDF | View/Open |
Use this identifier to cite or link to this item:
http://hdl.handle.net/11025/17016
All items in DSpace are protected by copyright, with all rights reserved.