This presentation demonstrates how to use MongoDB's aggregation pipeline much as you would use GROUP BY in SQL, and covers the new stage operators coming in 3.4. MongoDB's Aggregation Framework has many operators that let you get more value out of your data, discover usage patterns within it, or power your application directly. Considerations regarding version, indexing, operators, and saving the output are also reviewed.
5. What is the Aggregation Pipeline?
www.objectrocket.com
5
A framework for data visualization and/or manipulation using one or more stages executed in
order (i.e. a pipeline).
• Framework – Allows the transformation of data through stages; the result can be
an array, a cursor, or even a collection
• Visualization – Data transformation is not always required; the framework can
be used for basic counts, summations, and grouping
• Manipulation – Documents can be transformed as they pass through each stage,
preparing the data for the next stage or the final result set
• Output – The result can be iterated over using a cursor or saved to a collection
within the same database
• Expandable – New stages and operators are added with each major version, and in
3.4 views leverage the aggregation framework
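Conceptually, a pipeline is just an ordered list of transformations: each stage consumes the documents emitted by the previous stage and passes its own output forward. A minimal plain-JavaScript sketch of that idea (no MongoDB required; the "orders" data and field names are made up for illustration):

```javascript
// Hypothetical input documents; in MongoDB these would live in a collection.
const orders = [
  { item: "a", qty: 5, status: "shipped" },
  { item: "b", qty: 10, status: "pending" },
  { item: "a", qty: 15, status: "shipped" },
];

// Rough analogue of: db.orders.aggregate([{$match: ...}, {$group: ...}])
const matchStage = (docs) => docs.filter((d) => d.status === "shipped");
const groupStage = (docs) =>
  Object.values(
    docs.reduce((acc, d) => {
      acc[d.item] = acc[d.item] || { _id: d.item, totalQty: 0 };
      acc[d.item].totalQty += d.qty; // like {$sum: "$qty"}
      return acc;
    }, {})
  );

// Run the stages in order -- the "pipeline".
const result = [matchStage, groupStage].reduce((docs, stage) => stage(docs), orders);
console.log(result); // one group: { _id: "a", totalQty: 20 }
```

The key property the sketch shows: stage order matters, and each stage only ever sees the previous stage's output.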
7. Common Stages
$match - Filters (reduces) the number of documents that are passed to the next stage
$group - Groups documents by a distinct key; the key can also be a compound key
$project - Passes documents with specific fields or newly computed fields to the next stage
$sort - Returns the input documents in sorted order
$limit - Limits the number of documents passed to the next stage
$unwind - Splits an array into one output document per array element
$out - As the last stage, creates/replaces an unsharded collection with the input documents
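$unwind is the least intuitive stage on this list. A plain-JavaScript analogy of what it produces, using made-up "posts" documents (this is a sketch of the semantics, not MongoDB's implementation):

```javascript
// Each document has an array field "tags".
const posts = [
  { _id: 1, tags: ["mongodb", "nosql"] },
  { _id: 2, tags: ["sql"] },
];

// Rough analogue of: db.posts.aggregate([{ $unwind: "$tags" }])
// One output document per array element; the array field is replaced
// by the single element.
const unwound = posts.flatMap((doc) =>
  doc.tags.map((tag) => ({ ...doc, tags: tag }))
);
console.log(unwound.length); // 3 documents from 2 inputs
```

This is typically followed by a $group to count or sum per element.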
8. Common Operators
Operators that return a value based on document data:
Group Operators – $sum, $avg, $max, $min, $first, $last
Date Operators – $year, $month, $week, $hour, $minute, $second
Arithmetic Operators – $abs, $add, $multiply, $subtract, $trunc
Operators that return true or false based on document data:
Comparison Operators – $eq, $gt, $gte, $lt, $lte
Boolean Operators – $and, $or
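The group accumulators each fold many values into one, while comparison operators return booleans. A quick plain-JavaScript sketch over made-up response times in milliseconds (illustration only, not MongoDB's implementation):

```javascript
// Hypothetical per-document values that a $group stage would accumulate.
const times = [120, 80, 200, 100];

const sum = times.reduce((a, b) => a + b, 0); // $sum -> 500
const avg = sum / times.length;               // $avg -> 125
const min = Math.min(...times);               // $min -> 80
const max = Math.max(...times);               // $max -> 200

// Comparison operators evaluate to true/false, e.g. $gte:
const gte = (a, b) => a >= b;
console.log(sum, avg, min, max, gte(avg, min)); // 500 125 80 200 true
```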
9. Aggregate()
db.changelog.aggregate([
{$match : {"details.note":"success", "details.step 6 of 6": {$gte:0}}},
{$sort: {time:-1}},
{$limit: 100},
{$project : {'totalTime' : { '$add' : [ "$details.step 1 of 6","$details.step 2 of 6",
"$details.step 3 of 6","$details.step 4 of 6",
"$details.step 5 of 6","$details.step 6 of 6" ] } } },
{$group: {_id: null, averageTotalTime: {$avg: "$totalTime"} } }
]);
Collection: db.changelog
Purpose: Return the average number of milliseconds taken to move a chunk over the last one
hundred moves.
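To make the data flow through these five stages concrete, here is a plain-JavaScript walkthrough of the same pipeline against a tiny made-up "changelog" array (a sketch of the semantics; the real pipeline runs server-side):

```javascript
// Helper to build a changelog-like document with six step timings.
const doc = (time, note, steps) => ({
  time,
  details: Object.fromEntries(
    [["note", note], ...steps.map((ms, i) => [`step ${i + 1} of 6`, ms])]
  ),
});

const changelog = [
  doc(3, "success", [1, 1, 1, 1, 1, 1]), // totalTime 6
  doc(2, "success", [2, 2, 2, 2, 2, 2]), // totalTime 12
  doc(1, "aborted", [9, 9, 9, 9, 9, 9]), // dropped by $match
];

const result = changelog
  .filter((d) => d.details.note === "success" &&
                 d.details["step 6 of 6"] >= 0)     // $match
  .sort((a, b) => b.time - a.time)                  // $sort {time: -1}
  .slice(0, 100)                                    // $limit: 100
  .map((d) => ({                                    // $project with $add
    totalTime: [1, 2, 3, 4, 5, 6].reduce(
      (sum, n) => sum + d.details[`step ${n} of 6`], 0),
  }))
  .reduce((acc, d, i, arr) => {                     // $group {_id: null, $avg}
    acc.averageTotalTime += d.totalTime / arr.length;
    return acc;
  }, { _id: null, averageTotalTime: 0 });

console.log(result.averageTotalTime); // (6 + 12) / 2 = 9
```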
10. $match
db.changelog.aggregate([
{$match : {"details.note":"success", "details.step 6 of 6": {$gte:0}}},
{$sort: {time:-1}},
{$limit: 100},
{$project : {'totalTime' : { '$add' : [ "$details.step 1 of 6","$details.step 2 of 6",
"$details.step 3 of 6","$details.step 4 of 6",
"$details.step 5 of 6","$details.step 6 of 6" ] } } },
{$group: {_id: null, averageTotalTime: {$avg: "$totalTime"} } }
]);
Stage 1
Purpose: In the first stage, filter to only the chunks that moved successfully ($gte is a
comparison operator).
11. $sort
db.changelog.aggregate([
{$match : {"details.note":"success", "details.step 6 of 6": {$gte:0}}},
{$sort: {time:-1}},
{$limit: 100},
{$project : {'totalTime' : { '$add' : [ "$details.step 1 of 6","$details.step 2 of 6",
"$details.step 3 of 6","$details.step 4 of 6",
"$details.step 5 of 6","$details.step 6 of 6" ] } } },
{$group: {_id: null, averageTotalTime: {$avg: "$totalTime"} } }
]);
Stage 2
Purpose: Sort descending by time so the most recently moved chunks are prioritized.
12. $limit
db.changelog.aggregate([
{$match : {"details.note":"success", "details.step 6 of 6": {$gte:0}}},
{$sort: {time:-1}},
{$limit: 100},
{$project : {'totalTime' : { '$add' : [ "$details.step 1 of 6","$details.step 2 of 6",
"$details.step 3 of 6","$details.step 4 of 6",
"$details.step 5 of 6","$details.step 6 of 6" ] } } },
{$group: {_id: null, averageTotalTime: {$avg: "$totalTime"} } }
]);
Stage 3
Purpose: Further reduce the number of moves being analyzed, because the time to move a
chunk varies by chunk and collection.
13. $project
db.changelog.aggregate([
{$match : {"details.note":"success", "details.step 6 of 6": {$gte:0}}},
{$sort: {time:-1}},
{$limit: 100},
{$project : {'totalTime' : { '$add' : [ "$details.step 1 of 6","$details.step 2 of 6",
"$details.step 3 of 6","$details.step 4 of 6",
"$details.step 5 of 6","$details.step 6 of 6" ] } } },
{$group: {_id: null, averageTotalTime: {$avg: "$totalTime"} } }
]);
Stage 4
Purpose: For each moveChunk document, project the sum of the six step timings to the next
stage ($add is an arithmetic operator).
14. $group
db.changelog.aggregate([
{$match : {"details.note":"success", "details.step 6 of 6": {$gte:0}}},
{$sort: {time:-1}},
{$limit: 100},
{$project : {'totalTime' : { '$add' : [ "$details.step 1 of 6","$details.step 2 of 6",
"$details.step 3 of 6","$details.step 4 of 6",
"$details.step 5 of 6","$details.step 6 of 6" ] } } },
{$group: {_id: null, averageTotalTime: {$avg: "$totalTime"} } }
]);
Stage 5
Purpose: Return the average number of milliseconds taken to move a chunk over the last one
hundred moves ($avg is a group operator).
16. Projections
When using a $project stage, MongoDB reads and passes less data to the next stage. This
requires less CPU and RAM and reduces the disk I/O needed to process the aggregation.
db.jobs.aggregate([
{$match : {"type": "import"}},
{$sort: {"cluster": 1}},
{$project : { cluster: 1, type:1, seconds:1, _id: 0} },
{$group: {_id: {cluster: "$cluster", type: "$type"}, avgExecTime: {$avg: "$seconds"} } }
]);
Stage 3
By default, MongoDB tries to determine whether only a subset of fields is required; if so, it
requests only those fields and optimizes the stage for you.
17. Sequencing
When stages can be ordered more efficiently, Mongo will reorder those stages for you to improve
execution time.
db.jobs.aggregate([
{$sort: {"cluster": 1}},
{$match : {"type": "import"}},
{$project : { cluster: 1, type:1, seconds:1, _id: 0} },
{$group: {_id: {cluster: "$cluster", type: "$type"}, avgExecTime: {$avg: "$seconds"} } }
]);
By moving $match ahead of $sort, the optimizer reduces the number of documents that must be sorted.
18. Sequencing
When stages can be ordered more efficiently, Mongo will reorder those stages for you to improve
execution time.
db.jobs.aggregate([
{$match : {"type": "import"}},
{$sort: {"cluster": 1}},
{$project : { cluster: 1, type:1, seconds:1} },
{$group: {_id: {cluster: "$cluster", type: "$type"}, avgExecTime: {$avg: "$seconds"} } }
]);
In addition to sequence optimizations, MongoDB can also coalesce stages; for example, a $match
stage followed by another $match becomes a single stage. A full list of sequence and coalescence
optimizations can be viewed at Aggregation Pipeline Optimization in the MongoDB documentation.
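The reordering is safe because filtering before sorting yields the same result as sorting before filtering, only with fewer documents sorted. A plain-JavaScript sketch over made-up "jobs" data:

```javascript
const jobs = [
  { cluster: "b", type: "import", seconds: 10 },
  { cluster: "a", type: "export", seconds: 99 },
  { cluster: "a", type: "import", seconds: 20 },
];

// As written on the slide: $sort, then $match (3 documents sorted).
const sortThenMatch = [...jobs]
  .sort((x, y) => x.cluster.localeCompare(y.cluster))
  .filter((j) => j.type === "import");

// As the optimizer reorders it: $match, then $sort (2 documents sorted).
const matchThenSort = jobs
  .filter((j) => j.type === "import")
  .sort((x, y) => x.cluster.localeCompare(y.cluster));

// Identical output either way.
console.log(JSON.stringify(sortThenMatch) === JSON.stringify(matchThenSort)); // true
```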
19. Indexing and Data Merging
Only two stages have the ability to utilize indexes: $match and $sort. To use an index, these
stages must be the first stages in the pipeline. Starting in version 3.2, an index can cover an
aggregation. Like find(), you can generate an explain plan for an aggregation to view a more
detailed execution plan.
Also released in version 3.2 for aggregations:
• Data that does not require the primary shard no longer has to be merged on the primary shard.
• Aggregations that include the shard key in the $match stage and don’t require data from other
shards can execute entirely on the target shard.
20. Memory
Each stage has a limit of 100MB of RAM; this is the most common restriction encountered when
using the aggregation framework.
To exceed this limit, use the allowDiskUse option, which allows stages like $sort to write temporary files to disk.
db.jobs.aggregate([
{$match : {"type": "import"}},
{$sort: {"cluster": 1}},
{$project : { cluster: 1, type:1, seconds:1} },
{$group: {_id: {cluster: "$cluster", type: "$type"}, avgExecTime: {$avg: "$seconds"} } }
], {allowDiskUse: true});
This option should be used with caution in production due to added resource consumption.
22. Recursive Search
Recursively search a collection using $graphLookup. This stage takes its input from either the
collection itself or a previous stage (e.g. $match).
{
  $graphLookup: {
    from: "users",
    startWith: "$connections",
    connectFromField: "connections",
    connectToField: "name",
    as: "connections"
  }
}
Considerations
• This stage is limited to 100MB of RAM, even with the allowDiskUse option
• A maxDepth of zero is equivalent to $lookup
• Collation must be consistent when multiple views are involved
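What $graphLookup computes is a breadth-first traversal: start from the values in startWith, find documents whose connectToField matches, then follow each found document's connectFromField values, and so on. A plain-JavaScript sketch with made-up "users" data (illustration only, not MongoDB's implementation):

```javascript
const users = [
  { name: "ann", connections: ["bob"] },
  { name: "bob", connections: ["cam"] },
  { name: "cam", connections: [] },
];

function graphLookup(startWith, maxDepth = Infinity) {
  const found = [];
  const seen = new Set();
  let frontier = [...startWith];
  for (let depth = 0; depth <= maxDepth && frontier.length; depth++) {
    const next = [];
    for (const name of frontier) {
      if (seen.has(name)) continue;
      seen.add(name);
      const user = users.find((u) => u.name === name); // connectToField: "name"
      if (user) {
        found.push(user);
        next.push(...user.connections); // connectFromField: "connections"
      }
    }
    frontier = next;
  }
  return found;
}

// maxDepth of 0 follows only the startWith values, like a plain $lookup:
console.log(graphLookup(["bob"], 0).length); // 1 (just bob)
console.log(graphLookup(["bob"]).length);    // 2 (bob, then cam)
```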
28. Views
A read-only object that can be queried like the underlying collection. A view is created
using an aggregation pipeline and can be used to transform data or to limit access to data
in another collection.
• Computed on demand for each read operation
• Uses indexes from the underlying collection
• Names are immutable; to change the name, drop and recreate the view
• Can be created on sharded collections
• Listed as collections in getCollectionNames()
• Allows for more granular access control than collection-level RBAC
33. We're Hiring!
Looking to join a dynamic & innovative team?
Justine is here at Percona Live 2017; ask to speak with her!
Reach out directly to our Recruiter at
justine.marmolejo@rackspace.com
34. Thank you!
Address:
401 Congress Ave Suite 1950
Austin, TX 78701
Support:
1-800-961-4454
Sales:
1-888-440-3242