Workshop: Quantifying Error in Training Data for Mapping and Monitoring the Earth System
January 8-9, 2019
A workshop on “Quantifying Error in Training Data for Mapping and Monitoring the Earth System” was held on January 8-9, 2019 at Clark University, with support from Omidyar Network’s Property Rights Initiative, now PlaceFund.

Publié dans : Technologie
  • Soyez le premier à commenter

  • Soyez le premier à aimer ceci

  1. Workshop: Quantifying Error in Training Data for Mapping and Monitoring the Earth System (January 8-9, 2019)
  2. Entering a Golden Age of EO: high-resolution imagery, computing power, algorithm performance; training data requirements
  3. www.coursera.org/lecture/neural-networks-deep-learning/
  4. neuralnetworksanddeeplearning.com/chap3.html
  5. The Problem: training (and validation) relies heavily on image interpretation; data quality is rarely assessed; that can cause problems downstream.
  6-8. (image-only slides; no recoverable text)
  9. A review of the top 30 land-cover mapping papers
  10. Training/reference data are often taken at face value, but they can contain substantial error.
  11. How does that error propagate? Existing work provides some guidance, e.g., a study over South Africa.
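As an illustrative aside (not from the slides), the propagation question has a simple closed form in the simplest setting: one binary class, with reference-label errors assumed independent of map errors. The map and the reference agree when both are right or both are wrong, so imperfect reference data pulls apparent accuracy toward 0.5. A minimal sketch:

```python
import random

def apparent_agreement(true_accuracy, ref_error):
    """Expected map-reference agreement for one binary class when
    reference labels are independently wrong at rate ref_error:
    agreement occurs when both are right or both are wrong."""
    return true_accuracy * (1 - ref_error) + (1 - true_accuracy) * ref_error

def simulate_agreement(true_accuracy, ref_error, n=100_000, seed=1):
    """Monte Carlo check of the closed form."""
    rng = random.Random(seed)
    agree = sum(
        (rng.random() < true_accuracy) == (rng.random() >= ref_error)
        for _ in range(n)
    )
    return agree / n

# A 90%-accurate map scored against reference data that is itself
# 10% wrong appears only ~82% accurate.
print(round(apparent_agreement(0.90, 0.10), 4))  # 0.82
```

The independence assumption is strong (interpreter errors often correlate with classifier errors on the same hard pixels), which is part of why the question needs the fuller treatment the workshop targets.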
  12-13. (image-only slides; no recoverable text)
  14. Missing pieces: how to account for training/reference error in map accuracy? Need accuracy metric(s) that explicitly factor(s) this in. Existing studies as a starting point (thanks Gil!):
      - Carlotto, M. J. (2009). Effect of errors in ground truth on classification accuracy. International Journal of Remote Sensing, 30, 4831-4849.
      - Foody, G. M. (2010). Assessing the accuracy of land cover change with imperfect ground reference data. Remote Sensing of Environment, 114, 2271-2285.
      - McRoberts, R. E., et al. (2018). The effects of imperfect reference data on remote sensing-assisted estimators of land cover class proportions. ISPRS Journal of Photogrammetry and Remote Sensing, 142, 292-300.
      - Pontius, R. G., & Petrova, S. H. (2010). Assessing a predictive model of land change using uncertain data. Environmental Modelling & Software, 25, 299-309.
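The cited studies develop such metrics formally. Purely as an illustrative sketch (not the estimator from any of these papers): for one binary class, with a known reference error rate e assumed independent of map errors, the observed agreement a = p(1 - e) + (1 - p)e can be inverted to recover the true map accuracy p:

```python
def corrected_accuracy(observed_agreement, ref_error):
    """Invert a = p*(1 - e) + (1 - p)*e for the true map accuracy p,
    given observed map-reference agreement a and a known reference
    error rate e (binary class, errors assumed independent)."""
    if not 0 <= ref_error < 0.5:
        raise ValueError("reference error rate must be in [0, 0.5)")
    return (observed_agreement - ref_error) / (1 - 2 * ref_error)

# Observed 82% agreement against 10%-erroneous reference data
# implies a true map accuracy of 90%.
print(round(corrected_accuracy(0.82, 0.10), 4))  # 0.9
```

In practice e is itself uncertain and class- and error-structure-dependent, which is exactly the gap the workshop's proposed metric is meant to address.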
  15. Workshop Goals:
      - Create that new accuracy metric
      - Summarize the current state of knowledge: how is error quantified? How does it affect different training types and mapping methods? How does it affect end users and policy?
      - Identify new sources of error
      - Define best practices and standards (what is an acceptable standard of truth?)
      - Publish the results for the broader community
  16. Workshop structure: case studies; working groups; synthesize/distill.
      - Group 1, error and its quantification. Day 1: state of knowledge in accounting for map error. Day 2: prototype/select an accuracy metric.
      - Group 2, training data error. Day 1: taxonomy of training data and their error potential. Day 2: best practices in accounting for training data error.
      - Group 3, error implications. Day 1: impacts of errors on map use/interpretation. Day 2: how to communicate error and map usability.
  17. Post-workshop: working paper/pre-print, then a peer-reviewed journal article.
