
MongoDB World 2019: The Sights (and Smells) of a Bad Query

“Why is MongoDB so slow?” you may ask yourself on occasion. You’ve created indexes, you’ve learned how to use the aggregation pipeline. What the heck? Could it be your queries? This talk will outline what tools are at your disposal (both in MongoDB Atlas and in the MongoDB server) to identify inefficient queries.



  1. Alex Bevilacqua The Sights (and Smells) of a Bad Query @alexbevi
  2. INTRODUCTION { context: "speaker", smell: "pleasant" }
  3. Alex Bevilacqua Technical Services Engineer, MongoDB
  4. IT’S ME … MARIO! ALEX! • Application Developer • Development Lead • System Architect • Technical Services Engineer • Author • MongoDB Fanboy
  5. Sights?
  6. What we’ll be looking at • A general look at some queries • Introduction to some tooling • High-level discussion • Not focusing on "solutions"
  7. Smells?
  8. What stinks? https://en.wikipedia.org/wiki/Code_smell A code smell is any characteristic in the source code of a program that possibly indicates a deeper problem. Code smells are usually not bugs; they are not technically incorrect and do not prevent the program from functioning. Instead, they indicate weaknesses in design that may slow down development or increase the risk of bugs or failures in the future.
  9. Scenario • Atlas M10 Cluster • 2-3M documents • Generated and imported using a template (mgeneratejs) • Ran some ad hoc queries
  10. SIMPLE QUERY { context: "find", smell: "good?" }
  11. Our Dataset
  12. What’re we looking for? db.users.find({ age: 38 })
  13. Let’s find some data! • Found 53,516 Results • Took a few seconds longer than we’d like ☹
  14. Who’s seen this?
  15. ALERTS =
  16. Let’s see that again … as an Explain Plan • Query time higher than we would expect • COLLSCAN • Large number of documents scanned (entire collection!)
  17. COLLSCAN =
  18. Adding an Index db.users.createIndex({ age: 1 })
  19. Explain that again please • Performance greatly improved • IXSCAN • Fewer documents scanned (see the explain sketch after the slide list)
  20. QUERYING MULTIPLE FIELDS AND SORTING { context: { $in: ["find", "sort"] }, smell: "good?" }
  21. What’re we looking for? db.users.find({ age: { $gt: 25 }, "address.state": "UT" }).sort({ name: 1 })
  22. Let’s find some data – in order this time • Looking for everyone older than 25 from Utah, sorted by name • IXSCAN … but slow • Far more keys and documents examined than returned (we’ll come back to this one) • In-memory sort!
  23. SORT =
  24. We know we need an index!
  25. Let’s try that again …
  26. What’s the explain telling us? • IXSCAN using the expected index • Faster execution • Same number of documents scanned and returned … • In-memory sort???
  27. (E)quality – (S)ort – (R)ange db.users.find({ age: { $gt: 25 }, "address.state": "UT" }).sort({ name: 1 }) • Equality: "address.state" • Sort: name • Range: age (see the ESR index sketch after the slide list)
  28. Non-ESR Compound Indexes =
  29. METRICS AND LOGS AND CHARTS … OH MY! { context: "analysis", smell: "good?" }
  30. What can the Atlas Metrics tell us? • Sort operations that couldn’t use an index • Ratio of documents scanned to the number of documents returned • Increased average read/write time per operation
  31. Sharp rise in one or more metrics =
  32. Remember this?
  33. Check the Performance Advisor
  34. Get your logs
  35. Plotting queries by execution time
      ... COMMAND [conn416] command data.users command: aggregate { aggregate: "users", pipeline: [ { $match: {} }, { $skip: 0 }, { $group: { _id: null, n: { $sum: 1 } } } ], ... planSummary: COLLSCAN keysExamined:0 docsExamined:2585100 cursorExhausted:1 numYields:20196 nreturned:1 reslen:243 ... 1892ms
      mlogfilter *.log --markers none | mplotqueries --logscale
  36. Queries slower than slowms =
  37. Plotting queries by docsExamined/n
      … COMMAND [conn1136] command data.users … { find: "users", filter: { age: { $gt: 25.0 }, address.state: "UT" }, sort: { name: 1.0 }, … planSummary: IXSCAN { age: 1 } cursorid:73909299301 keysExamined:2154391 docsExamined:2154391 hasSortStage:1 numYields:17029 nreturned:101 … 94212ms
      mlogfilter *.log --markers none | mplotqueries --type docsExamined/n
  38. Large docsExamined/n =
  39. What can stink? • Alerts (from Atlas, Ops Manager, Cloud Manager) • COLLSCAN (in logs or explain plans) • SORT (in explain plans or in logs as hasSortStage) • Non-ESR Compound Indexes • Sharp rise in one or more metrics (in Atlas, Ops Manager or Cloud Manager) • Queries slower than slowms • Large docsExamined/n ratio (see the slowms and docsExamined/n sketch after the slide list)
  40. Links Notes/links from this talk available at: http://bit.ly/ab-mdbw19 alexbevi alexbevi @alexbevi
  41. THANK YOU FOR ATTENDING! QUESTIONS?
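
Explain sketch (slides 12-19). A minimal mongosh walkthrough of the COLLSCAN-to-IXSCAN workflow described above; the collection and field names come from the slides, while the commented explain-output values are assumptions based on the figures the talk quotes:

    // Before indexing: the explain plan shows a full collection scan
    db.users.find({ age: 38 }).explain("executionStats")
    // winningPlan stage: COLLSCAN
    // executionStats.totalDocsExamined covers the whole collection; nReturned: 53516

    // Add the single-field index from slide 18
    db.users.createIndex({ age: 1 })

    // Re-run the explain: the same query now walks the index
    db.users.find({ age: 38 }).explain("executionStats")
    // winningPlan stage: IXSCAN { age: 1 }
    // totalKeysExamined and totalDocsExamined now roughly match nReturned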
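ESR index sketch (slides 27-28). The transcript doesn't show the compound index the speaker actually created, so the ordering below is an assumption that simply applies the ESR (Equality, Sort, Range) rule to the query from slide 21:

    // Equality on "address.state", sort on name, range on age
    db.users.find({ age: { $gt: 25 }, "address.state": "UT" }).sort({ name: 1 })

    // ESR ordering: Equality field first, then the Sort field, then the Range field
    db.users.createIndex({ "address.state": 1, name: 1, age: 1 })

    // A non-ESR ordering such as { age: 1, "address.state": 1, name: 1 } can still
    // produce an IXSCAN yet leave an in-memory SORT stage (hasSortStage: 1 in the logs)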
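slowms and docsExamined/n sketch (slides 34-38). A sketch of how the logged slow queries and the scanned-to-returned ratio can be reproduced from the shell; setProfilingLevel is a standard shell helper, and the 100 ms threshold is only an illustrative value, not one from the talk:

    // Profiler level 0 still logs any operation slower than slowms
    db.setProfilingLevel(0, { slowms: 100 })

    // The ratio mplotqueries charts can also be read straight off an explain plan
    const stats = db.users.find({ age: { $gt: 25 }, "address.state": "UT" })
                          .sort({ name: 1 })
                          .explain("executionStats").executionStats
    print(stats.totalDocsExamined / stats.nReturned)  // a large ratio is the smell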
