2nd MAM Survey (DECLERCQ)
1. FIAT/IFTA MAM Survey 2017
Highlights from the results analysis
FIAT/IFTA World Conference, Mexico City 2017 – Brecht Declercq (VIAA) – 21.10.2017
2. Why the MMC MAM Surveys?
Archive world is changing fast
You find cutting-edge archive knowledge almost only in the archives themselves
FIAT/IFTA members are in search of answers
MMC collects, processes and distributes know-how
2nd survey, following the first one in 2015
7. MAM SURVEY 2017 RESPONDENTS
Number of responses: 56
Number of unique archives: 53
Broadcaster's archives: 40 (71%)
Regional / national AV archives: 14 (25%)
Others: 2 (4%)
9. Metadata creation methods: classification
• manually: manual annotation in the archive
• harvesting: importing existing data from production
• mining: algorithms generate new meaning from input data
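As a minimal illustration of this three-way classification, the concrete methods from the survey's answer list can be mapped onto the three categories. This is a hypothetical Python sketch for illustration, not part of any survey tooling:

```python
# Hypothetical sketch: mapping the survey's answer options for metadata
# creation methods onto the three categories used in the survey.
CATEGORY = {
    "Manual, internal": "manual",
    "Manual, external commercial service": "manual",
    "Import from production: planning system": "harvesting",
    "Import from production: closed captioning": "harvesting",
    "Automatic feature extraction: speech-to-text": "mining",
    "Automatic feature extraction: other technologies": "mining",
}

def methods_in(category: str) -> list[str]:
    """Return all survey methods that fall into the given category."""
    return [m for m, c in CATEGORY.items() if c == category]
```

For example, `methods_in("mining")` returns the two automatic feature extraction methods.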
10. Metadata creation methods
Per method: how many respondents use this method at all / for which share of your items on average (non-users excluded):
Manual, internal: 96% / 60%
Import from production: other systems: 78% / 39%
Import from production: planning system: 67% / 50%
Copy and paste from online sources: 54% / 23%
Import from production: closed captioning: 48% / 39%
Manual, external commercial service: 31% / 36%
Automatic feature extraction: speech-to-text: 24% / 21%
Manual, external non-commercial: 15% / 25%
Automatic feature extraction: other technologies: 9% / 32%
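The chart above combines two different measures: the share of all respondents that use a method at all, and the average share of items, computed over users only. A minimal Python sketch (with hypothetical data) shows how the two differ:

```python
# Hypothetical illustration of the chart's two measures:
# "adoption" = share of all respondents using a method at all;
# "avg_share" = mean share of items, computed over users only.
def summarize(shares: list[float]) -> tuple[float, float]:
    """shares: one value per respondent (fraction of items), 0.0 for non-users.
    Returns (adoption rate, average share among users)."""
    users = [s for s in shares if s > 0]
    adoption = len(users) / len(shares)
    avg_share = sum(users) / len(users) if users else 0.0
    return adoption, avg_share

# Example: of 4 respondents, only 1 uses a method, for 20% of its items.
print(summarize([0.0, 0.0, 0.0, 0.2]))  # (0.25, 0.2)
```

This is why a method can show a low adoption rate while still covering a substantial share of items among the archives that actually use it.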
26. Why don’t you use speech-to-text (yet)?
Not good enough for our main languages: 23%
How to integrate it in MAM architecture?: 13%
Costs will outweigh benefits: 13%
Our MAM doesn't allow it: 11%
Implementation is underway: 9%
Others: in some stage of preparation: 13%
Others: no MAM yet / too early: 9%
Others: we use subtitling, so low priority: 4%
Others: 5%
(We have it already): 11%
31. Combined search details
25 institutions can search across multiple databases at once: 14 (anonymized) with 4 or more sources, 4 with 3 sources, and 7 with 2 sources.
Number of databases searchable per institution: 9, 7, 7, 7, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 3, 2.
Database sources, with the number of institutions that include them:
Television / Video Archive: 14
Radio / Audio Archive: 13
Photo, image, graphics database: 9
Newspapers: 7
Document Management System: 5
Television License Database: 5
Television Planning System: 5
Radio License Database: 5
Radio Planning System: 4
Newswires, Agency Information: 2
Other: 2
MAM: 1
Administrative Data: 1
Books and Manuscripts: 1
Web-Archive: 1
(* = happy with the search possibilities as they are now)
35. Metadata for target groups other than production?
(answers: no we don't / yes, from the beginning / yes, from later on / other)
broadcaster's archives: 45% / 16% / 34% / 5%
national/regional AV archives: 36% / 36% / 14% / 14%
others: 50% / 50% / – / –
total: 43% / 22% / 28% / 7%
36. Adaptations on metadata for other target groups?
we adapt the vocabulary used: 35%
we adapt the metadata categories: 19%
we decrease the level of detail: 19%
other: 11%
no we don't adapt our metadata: 43%
37. THANK YOU!
FIAT/IFTA MMC & PMC Members
Eléonore Alquier (INA)
Elena Brodie-Kusa (EBK)
Anne Couteux (INA)
Brecht Declercq (VIAA)
Gerhard Stanz (ORF)
More and deeper results in the publication after this conference!
Editor's notes
A little disclaimer...
In the words attributed to Benjamin Disraeli:
I will today be presenting you: LIES, DAMNED LIES... and STATISTICS
Categorizing metadata creation methods can be done in many ways.
You can rank them according to:
when it happens during the media lifecycle: before, during or after production, during re-use, etc.
who creates the metadata: the producer, the archivist, the media consumer, etc.
which kind of tools are used: production tools, automatic analysis tools, or the eye, the brain and the hands, ...
Etc.
For this survey we have chosen to work mainly with these three categories: manual, harvesting and mining.
QUESTION 1.
Manual, internal metadata annotation by archivists / catalogers is – no surprise – the most popular method, with 57% of items on average cataloged via this method.
Imports from all kinds of production systems come next.
Interesting to know is also that no less than 1 out of 3 archives calls on the help of an external commercial service. And where such services are used, they are used for quite a few items: those who use them do so for 36% of their items on average.
Speech-to-text is already in use at 1 out of 4 archives. Those who use it do so for a bit more than 1 out of 5 of their items.
And one last one: 1 out of 10 respondents indicated they already use other automatic feature extraction tools. Where they do, these tools are quite popular: they are used for 1 out of 3 of their items on average.
STT: the 1-out-of-5 figure is relative (users only)!!
QUESTION B2.
Harvesting vs. Mining
Currently around 40% of the relevant metadata is imported into archive systems.
Even though some institutions regard 100% as achievable, the average estimate is still that 60% of the necessary metadata could be imported.
Some words on the reasons (technical / organisational):
- Image description will remain manual for quite some time.
A list of metadata elements and challenges will follow later in the publication.
NEW B3
QUESTION D12.
16 Answers
The category “Sonstige” (= other) is the largest, which means ...
... there seem to be far more such sources than were known to us when we created the question.
Probably an interesting task for the MMC to compile a list of such external sources, in order to exchange their strengths and weaknesses and general tips on how to use them.
Volunteers, step forward.
QUESTION D10.
Already one third, more than I would have expected
QUESTION 6.
QUESTION 9.
How happy you are with the quality of the metadata put in by the production staff is very scattered, to say the least.
On average you judge it with a score of 5.48 out of 10.
QUESTION 9.
The delay of the metadata put in by the production staff is also judged very differently across respondents.
On average you judge it with a score of 5.70 out of 10.
QUESTION E14.
Algorithms that we know of – 5 questions, indicating a consistently higher/lower level of approval.
Whether question 22 has really been understood is questionable – it was probably confused with a simple 4:3 vs. 16:9 distinction, and does not imply anamorphic 4:3 or detecting a true 4:3 letterbox ratio (10% = 9 archives).
Explanation of the “mining algorithms”.
This time speech-to-text is the maximum.
Archivists are optimists (open towards the possibilities of technology).
Proof that questions 15 and 23 are indeed consciously chosen.
Speech-to-text is seen most critically!! There is antagonism in this area. Probably the expectations were too high.
QUESTION 26.
6 out of 54 respondents use speech-to-text in the archive.
It is clear that they consider it a tool for audio as well as for video.
QUESTION 27.
QUESTION 28.
The overall judgment of the quality of the speech-to-text results is not so bad after all,
although one archive had a bad experience.
The average result is a fair 6.83 out of 10.
QUESTION 29.
As the recognition of proper nouns and entities is often more problematic, we’ve asked specifically about that.
Also here the results are not so negative, except in one case.
But with an average of 6.66, there still seems to be a gap between what archivists would like and what the industry can deliver.
QUESTION 31.
Distribution between manual and automated (harvesting AND mining) annotation.
QUESTION H34.
Again the 30 to 40% that remains manual.
QUESTION G42.
50% have no distributed search possibility.
What does the relatively high figure of almost 20% “not applicable” answers mean? (Small “one database” institutions?)
QUESTION G43.
Of the 25 institutions that can search multiple databases at once, one third have more than 4 searchable databases.
12 are satisfied with their combined search capabilities; the others want some more databases.
Distribution between manual and automated (harvesting AND mining) annotation.
QUESTION 36.
QUESTION 38.
QUESTION 40.
An interesting conclusion here, is that broadcasters take into account the needs of other user groups than audiovisual production EXACTLY AS MUCH AS NATIONAL / REGIONAL ARCHIVES DO. The only difference is that the broadcasters do it only at a later stage, while the national / regional archives tend more to do it from the beginning.
In general 1 out of 2 archives DOES take into account the metadata needs of other target groups.