1. Architectural, Spatial, and
Navigational Metaphors as Design
Points for Collaboration
John “Boz” Handy-Bosma, Ph.D.
Chief Architect for Collaboration, IBM Office of the CIO
For KM Chicago, May 8, 2012
5. MOSFET Architecture, scaling recipes, and Moore's Law

If scaled by constant K       Decreased by
  Dimensions                    Circuit delay: K
  Voltages                      Power/circuit: K²
  Doping levels                 Power-delay product: K³
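As a minimal sketch of the table above (assuming the standard Dennard scaling relations, where delay, power per circuit, and power-delay product fall by K, K², and K³ while power density stays constant):

```python
def dennard_scale(k):
    """Predicted change in key figures of merit when device dimensions
    and voltages shrink by 1/k (and doping rises by k), per Dennard."""
    return {
        "circuit_delay": 1 / k,          # delay decreases by k
        "power_per_circuit": 1 / k**2,   # power per circuit decreases by k^2
        "power_delay_product": 1 / k**3, # their product decreases by k^3
        "power_density": 1.0,            # held constant: the key promise
    }

# Halving feature size (k = 2) predicts 2x faster circuits at 1/4 the power each.
print(dennard_scale(2))
```

The point of the recipe is the last line: because power density stays constant, the factors can keep being scaled in the same ratio generation after generation.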
6. Recipes in Dennard's Scaling Theory

Factors maintained in constant ratio:
- Consistent inputs, in proportion
- Not classical power laws
- Multiple factors
- Relationships among factors
- No claims about development in relation to time
- How to achieve the ratios is not addressed

Predictable outcomes on figures of merit:
- Figure of merit: a quantity characterizing performance
- Used for benchmarking and comparisons (e.g., clock speed in a CPU, wicking factor in fabrics)
- Consistent measurement
- Related figures held to constant performance, not degraded
7. Where to look for scaling principles?
(Answer: where things are clogged or crowded)
8. An approach

Identify practical recipes for improving Collaboration and Search. Use these as input to decisions on architecture and design. Factors maintained in constant ratio (roughly, in proportion), with consistent improvement in specific factors, yield predictable outcomes in figures of merit.

Multiple factors:
- Wayfinding (e.g., navigating, searching, sorting, filtering)
- Information production (e.g., quality and quantity of authoring, tagging, publishing)
- Bidirectional (e.g., reciprocal networking among participants)
- Relationships: mutually reinforcing, mutually impinging, exponential
- Factors are expressed via specific solutions as used in the field

Recipes enable balanced improvement in search, collaboration, and metrics:
- Precision and recall
- Content and metadata
- Adoption and use

Experimentation allows for measurement and improvement on key measures, but it is important to identify potential trade-offs in figures of merit resulting from technical and social factors:
- Serial navigation (similar to Fitts's law)
- Impact of follower models on the signal-to-noise ratio of communication
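One of the trade-offs named above, serial navigation, can be quantified with Fitts's law: pointing time grows with the index of difficulty, log2(D/W + 1). The coefficients `a` and `b` and the step distances below are illustrative placeholders, not measured values:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Fitts's law: predicted time (seconds) to acquire one target.
    a and b are device/user-specific constants (illustrative here)."""
    return a + b * math.log2(distance / width + 1)

# Serial navigation cost: times add up across a click sequence, so deep
# navigation paths degrade the time-to-find figure of merit.
steps = [(400, 40), (300, 30), (500, 25)]  # (distance, width) per step, in pixels
total = sum(fitts_time(d, w) for d, w in steps)
print(round(total, 2))
```

This is why a recipe that trades one extra navigation step for slightly better precision may still worsen the overall figure of merit.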
9. Wayfinding and Isovist: How is search relevance measured?

Key Term     Definition
Relevance    A subjective measure of whether a document in a search result answers a query
Precision    A measure of the percentage of documents in a result list that answer a query
Recall       A measure of the percentage of documents in a result list relative to all relevant documents in a collection
Pertinence   A subjective measure of whether a document in a search result answers a query (in light of previous knowledge or experience)
Aboutness    The subjects and topics conveyed by a document or query
Isovist      Pertinent items visible | not visible at any given point in a navigational sequence

Traditional evaluation tests for performance using known corpora and results (e.g., TREC). It typically uses a single query and response, rather than a series of interactions between users and the search engine, and it is geared toward the top of the results list. But traditional approaches are not sufficient to measure relevance of results where relevance is determined by social interaction and collaboration outcomes!
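The precision and recall definitions in the table can be sketched directly. Here `relevant` stands in for the (subjective) set of documents judged to answer the query; the document IDs are invented for illustration:

```python
def precision(results, relevant):
    """Fraction of retrieved documents that answer the query."""
    if not results:
        return 0.0
    return len(set(results) & set(relevant)) / len(results)

def recall(results, relevant):
    """Fraction of all relevant documents in the collection that were retrieved."""
    if not relevant:
        return 0.0
    return len(set(results) & set(relevant)) / len(relevant)

results = ["d1", "d2", "d3", "d4"]   # ranked result list returned by the engine
relevant = {"d1", "d3", "d7", "d9"}  # documents judged relevant in the collection
print(precision(results, relevant))  # 0.5 (2 of 4 retrieved are relevant)
print(recall(results, relevant))     # 0.5 (2 of 4 relevant were retrieved)
```

Note what the sketch cannot capture: pertinence and isovist depend on the user's prior knowledge and position in a navigational sequence, which no static judgment set encodes.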
10.
11. Example: What aspects of metadata facilitate collaboration?
Collaboration capability → Metadata features:

Integrating disparate bodies of content from multiple sources / communities:
- Incorporate global and local extensions to vocabulary
- Query modification to allow lateral navigation
- Matching on shared interests

Team coordination:
- Content previews, review and approval, collaborative workflow
- Tagging at group level
- Metadata suggestions

Positive network effects from sharing in social channels:
- Social tagging and bookmarking
- Rankings and ratings
- Clickstream analysis for ranking

Knowledge elicitation:
- Query expansion: a) conditional metadata, b) "Did you mean?"
- Tag notifications

Facilitate collaboration among disparate language communities:
- Unique and mapped display values, e.g., Social Authority
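One feature above, matching on shared interests, can be sketched with tag overlap (Jaccard similarity over each user's tag set). The users, tags, and threshold are invented for illustration, not taken from the deck:

```python
def jaccard(a, b):
    """Tag-overlap similarity between two tag sets (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical social-tagging data.
user_tags = {
    "alice": {"search", "taxonomy", "metadata"},
    "bob":   {"metadata", "workflow", "search"},
    "carol": {"fabrics", "wicking"},
}

def shared_interest_matches(user, threshold=0.3):
    """Suggest collaborators whose tag sets overlap the given user's."""
    return [other for other, tags in user_tags.items()
            if other != user and jaccard(user_tags[user], tags) >= threshold]

print(shared_interest_matches("alice"))  # ['bob']
```

Group-level tagging and metadata suggestions then become ways of deliberately increasing this overlap across a team.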
12. Example: when is metadata search helpful to collaboration?

When metadata search?
✔ Multiple set membership for searchables
✔ Sufficient completeness and quality of classification scheme
✔ Adequate accuracy of categorization
✔ Leads to improved effective precision and time to find

When not metadata search?
✗ Precise results can be obtained without metadata
✗ When metadata leads to undesirable phenomena such as conjunction search, serial navigation, or error propagation

Often assumed, but questionable:
? That a single large corpus is to be searched
? That metadata require a hierarchical taxonomy with many classifiers
? That agreement on taxonomy is needed
? That searches are for documents (as opposed to collections of documents, parts of documents, people, facts, etc.)
? That metadata operations only involve "anding" on attributes to find instances
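The last questionable assumption, that metadata operations only involve "anding" attributes, can be contrasted in code. With multiple set membership an item may carry several values per facet, and OR queries within a facet are as natural as AND across facets. The sample documents and facets are invented for illustration:

```python
# Each document carries sets of values per facet (multiple set membership).
docs = {
    "d1": {"topic": {"search", "metadata"}, "type": {"howto"}},
    "d2": {"topic": {"search"},             "type": {"spec"}},
    "d3": {"topic": {"workflow"},           "type": {"howto"}},
}

def has(doc, facet, value):
    """True if the document carries the given value in the given facet."""
    return value in docs[doc].get(facet, set())

# AND across attributes: the commonly assumed operation.
and_hits = [d for d in docs if has(d, "topic", "search") and has(d, "type", "howto")]

# OR within a facet, enabled by multiple set membership.
or_hits = [d for d in docs if has(d, "topic", "search") or has(d, "topic", "workflow")]

print(and_hits)  # ['d1']
print(or_hits)   # ['d1', 'd2', 'd3']
```

The AND-only query also illustrates conjunction search as a cost: every added conjunct narrows the result set and adds a step the user must reason about.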
13. Measuring effective precision of metadata search

[Diagram: privacy-preserving cookies, search queries, and clickstream data feed a clickstream repository and a segmentation analysis database; survey and ratings info feeds survey and ratings repositories.]

Sequence:
- Log the sequence of user actions in a search session (queries, metadata selections, links)
- Work backward from a known result (document click, download, print, tag, bookmark, notify, rate, exit)
- Establish the influence of each step in the sequence on the ranking of the document(s) that elicited that result (via rankings in results list)
- Query by segments of interest using aggregated data

Example: Is stemming improving the search results? Method: A/B tests using stemming, sample measures of search precision.
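The stemming example could be sketched as an A/B comparison over logged sessions. The session data and the effective-precision measure below are illustrative stand-ins, not the author's actual pipeline:

```python
def effective_precision(session):
    """Share of actions in a session that ended in a useful outcome
    (download, tag, bookmark) -- a stand-in for effective precision."""
    useful = sum(1 for action in session if action in {"download", "tag", "bookmark"})
    return useful / len(session) if session else 0.0

# Hypothetical logged sessions: arm A without stemming, arm B with stemming.
arm_a = [["click", "click", "download"], ["click", "exit"]]
arm_b = [["download", "tag"], ["click", "bookmark", "download"]]

mean_a = sum(map(effective_precision, arm_a)) / len(arm_a)
mean_b = sum(map(effective_precision, arm_b)) / len(arm_b)
print(round(mean_b - mean_a, 3))  # 0.667: positive lift for the stemming arm
```

In practice the lift would be checked per segment of interest (from the segmentation database) and weighed against trade-offs in other figures of merit, per the earlier slides.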
14. Optimization Cycle

1. Configure Practices and Tools
2. Observe practice
3. Evaluate Bottlenecks
4. Propose New Variables
5. Build new
6. Measure Outcomes
7. Transition Variables to Constants