Disclaimer
During the course of this presentation, we may make forward-looking statements regarding future events or the expected performance of the company. We caution you that such statements reflect our current expectations and estimates based on factors currently known to us and that actual events or results could differ materially. For important factors that may cause actual results to differ from those contained in our forward-looking statements, please review our filings with the SEC. The forward-looking statements made in this presentation are being made as of the time and date of its live presentation. If reviewed after its live presentation, this presentation may not contain current or accurate information. We do not assume any obligation to update any forward-looking statements we may make.
In addition, any information about our roadmap outlines our general product direction and is subject to change at any time without notice. It is for informational purposes only and shall not be incorporated into any contract or other commitment. Splunk undertakes no obligation either to develop the features or functionality described or to include any such feature or functionality in a future release.
Machine Learning 101: What is it?
• Machine Learning is a process for generalizing from examples
– Examples = example or “training” data
– Generalizing = building “statistical models” to capture correlations
– Process = never quite done; we keep validating & refitting models to improve accuracy
• Simple Machine Learning workflow:
– Explore data
– FIT models based on data
– APPLY models in production
– Keep validating models
“All models are wrong, but some are useful.”
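A minimal sketch of that workflow in SPL, using the ML Toolkit fit and apply commands covered later in this deck (the index, field, and model names here are illustrative, not from the deck):

  Fit on historical data:
    index=web earliest=-30d@d latest=@d
    | fit LinearRegression response_time from bytes_in bytes_out concurrent_users into response_model

  Apply to new data, then keep validating:
    index=web earliest=-15m
    | apply response_model
    | eval error = response_time - 'predicted(response_time)'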
Three Types of Machine Learning
1. Supervised Learning: generalizing from labeled data
Gather data:
• Dimensions
• Stem Length
• Color
• Etc.
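A hedged sketch of fitting on labeled examples with the ML Toolkit, assuming the goal is to classify which type of flower an observation is (the lookup files, field names, and label are hypothetical, chosen to match the features listed above):

  | inputlookup flowers.csv
  | fit LogisticRegression species from stem_length petal_length petal_width color into flower_model

  New, unlabeled observations:
  | inputlookup new_flowers.csv
  | apply flower_model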
Three Types of Machine Learning
2. Unsupervised Learning: generalizing from unlabeled data
Will my home sell?
Gather data:
• Square feet
• Levels
• Parks nearby
• Schools
• Zipcode
• Etc.
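A hedged clustering sketch over features like the ones above (lookup and field names hypothetical): without labels you cannot predict "will it sell" directly, but KMeans can group similar homes so you can study what each cluster has in common.

  | inputlookup home_listings.csv
  | fit KMeans k=5 square_feet levels parks_nearby school_rating into home_clusters
  | stats count avg(square_feet) AS avg_sqft by cluster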
Three Types of Machine Learning
3. Reinforcement Learning: generalizing from rewards in time
Recommendation Engines
IT Ops: Predictive Maintenance
Problem: Network outages and truck rolls cause big time & money expense
Solution: Build predictive model to forecast outage scenarios, act pre-emptively & learn
1. Get resource usage data (CPU, latency, outage reports)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast resource saturation, demand & usage
5. Surface incidents to IT Ops, who INVESTIGATES & ACTS
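A hedged sketch of step 4 using core Splunk's predict command (the index and field names are hypothetical):

  index=infra sourcetype=cpu_metrics
  | timechart span=1h avg(cpu_load_pct) AS cpu_load
  | predict cpu_load future_timespan=24
  | where 'upper95(prediction(cpu_load))' > 90

The final where clause keeps the forecast hours whose upper confidence bound crosses a 90% load threshold, which is what would get surfaced to IT Ops in step 5.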
Security: Find Insider Threats
Problem: Security breaches cause big time & money expense
Solution: Build predictive model to forecast threat scenarios, act pre-emptively & learn
1. Get security data (data transfers, authentication, incidents)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast abnormal behavior, risk scores & notable events
5. Surface incidents to Security Ops, who INVESTIGATES & ACTS
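A hedged sketch of step 4 using core Splunk's anomalydetection command (index and field names hypothetical): summarize each user's daily transfer behavior, then flag the unusual combinations.

  index=proxy
  | bin _time span=1d
  | stats sum(bytes_out) AS bytes_out_per_day dc(dest) AS distinct_dests by user _time
  | anomalydetection bytes_out_per_day distinct_dests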
Business Analytics: Predict Customer Churn
Problem: Customer churn causes big time & money expense
Solution: Build predictive model to forecast possible churn, act pre-emptively & learn
1. Get customer data (set-top boxes, web logs, transaction history)
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Identify customers likely to churn
5. Surface incidents to Business Ops, who INVESTIGATES & ACTS
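A hedged fit/apply sketch for churn (the lookup, index, fields, and model name are hypothetical):

  Fit on historical customers whose churn outcome is known:
    | inputlookup customer_history.csv
    | fit LogisticRegression churned from support_tickets days_since_last_login monthly_spend into churn_model

  Score current customers:
    index=customer_activity
    | stats latest(support_tickets) AS support_tickets latest(days_since_last_login) AS days_since_last_login latest(monthly_spend) AS monthly_spend by customer_id
    | apply churn_model

apply adds a predicted(churned) field, which downstream searches can use to surface likely churners to Business Ops.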
Summary: The Machine Learning Process
Problem: <Stuff in the world> causes big time & money expense
Solution: Build predictive model to forecast <possible incidents>, act pre-emptively & learn
1. Get all relevant data to problem
2. Explore data, and fit predictive models on past / real-time data
3. Apply & validate models until predictions are accurate
4. Forecast KPIs & metrics associated with the use case
5. Surface incidents to X Ops, who INVESTIGATES & ACTS
Operationalize
1. Get Data & Find Decision-Makers!
Diagram: machine data sources (devices, networks, servers, applications, Hadoop, online shopping carts, GPS/cellular, clickstreams) and structured data sources (CRM, ERP, HR, billing, product, finance, data warehouse) flow into Splunk, the latter via DB Connect and look-ups. IT users, analysts, and business users consume results through ad hoc search, monitoring and alerting, reports/analysis, custom dashboards, and ODBC/SDK/API integrations.
2. Explore Data, Build Searches & Dashboards
• Start with the Exploratory Data Analysis phase
– “80% of data science is sourcing, cleaning, and preparing the data”
• For each data source, build “data diagnostic” dashboard
– What’s interesting? Throw up some basic charts.
– What’s relevant for this use case?
– Any anomalies? Are thresholds useful?
• Mix data streams & compute aggregates (see the sketch below)
– Compute KPIs & statistics w/ stats, eventstats, etc.
– Enrich data streams with useful structured data
– stats count by X Y, where X and Y come from different sources
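A hedged example of that mixing in SPL (the index, lookup, and field names are hypothetical):

  index=web sourcetype=access_combined
  | stats count AS requests avg(bytes) AS avg_bytes by host status
  | lookup host_inventory host OUTPUT business_unit
  | eventstats sum(requests) AS total_requests by business_unit
  | eval pct_of_unit_traffic = round(100 * requests / total_requests, 2)

Here status comes from the web logs and business_unit from a structured look-up, so the aggregate genuinely spans different sources.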
3. Get the ML Toolkit & Showcase App
• Get the App! https://splunkbase.splunk.com/app/2890
• Leverages Python for Scientific Computing (PSC) add-on:
– Open-source Python data science ecosystem
– NumPy, SciPy, scikit-learn, pandas, statsmodels
• Showcase use cases: Hard Drive Failure, Server Power consumption,
Server Response Time, Application Usage
• Standard algorithms out of the box:
– Supervised: Logistic Regression, SVM, Linear Regression, Random Forest
– Unsupervised: KMeans, DBSCAN, Spectral Clustering
• Implement one of 300+ algorithms by editing Python scripts
4. Fit, Apply & Validate Models
• Machine Learning SPL – New grammar for doing ML in Splunk
• fit – fit models based on training data
– [training data] | fit LinearRegression costly_KPI
from feature1 feature2 feature3 into my_model
• apply – apply models on testing and production data
– [testing/production data] | apply my_model
• Validate Your Model (The Hard Part)
– Why hard? Because statistics is hard! Also: model error ≠ real world risk.
– Analyze residuals, mean-square error, goodness of fit, cross-validate, etc. (see the sketch below)
– Take Splunk’s Analytics & Data Science Education course
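A hedged residual check for the regression above, reusing its my_model / costly_KPI names (the predicted(...) field follows the naming that apply produces):

  [testing data]
  | apply my_model
  | eval residual = costly_KPI - 'predicted(costly_KPI)'
  | eval sq_err = residual * residual
  | stats avg(residual) AS mean_error avg(sq_err) AS mse
  | eval rmse = sqrt(mse)

A mean error far from zero suggests bias; compare rmse across candidate models (and against held-out data) before trusting one in production.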
5. Operationalize Your Models
• Remember the ML Process:
1. Get data
2. Explore data & fit models
3. Apply & validate models
4. Forecast KPIs
5. Surface incidents to Ops team
• Then operationalize: feed back Ops analysis to data inputs, repeat
• Lots of hard work & stats, but lots of value will come out.
Operationalize
Sneak Peek Recap: What’s new in GA
• New Algorithms (Random Forest, Lasso, Kernel PCA, and more…)
• More use cases to explore
• Support added for Search Head Clustering
• Removed 50k limit on model fitting
• Sampling for training/test data
• Guided ML via an ML Assistant (aka Model/Query Builder)
• Install on 6.4 Search Head
What Do I Do Next?
• Reach out to your Tech Team! We can help architect Machine Learning
(ML) workflows.
• Lots of ML commands ship in core Splunk (predict, anomalydetection, stats)
• ML Toolkit & Showcase – available and free, ready to use... GO!
• Splunk ITSI: Applied ML for ITOA use cases
– Manage 1000s of KPIs & alerts
– Adaptive Thresholding & Anomaly Detection
• Splunk UBA: Applied ML for Security
– Unsupervised learning of Users & Entities
– Surfaces Anomalies & Threats
• ML Customer Advisory Program:
– Connect with Product & Engineering teams - mlprogram@splunk.com
SEPT 26-29, 2016
WALT DISNEY WORLD, ORLANDO
SWAN AND DOLPHIN RESORTS
• 5000+ IT & Business Professionals
• 3 days of technical content
• 165+ sessions
• 80+ Customer Speakers
• 35+ Apps in Splunk Apps Showcase
• 75+ Technology Partners
• 1:1 networking: Ask The Experts and Security
Experts, Birds of a Feather and Chalk Talks
• NEW hands-on labs!
• Expanded show floor, Dashboards Control
Room & Clinic, and MORE!
The 7th Annual Splunk Worldwide Users’ Conference
PLUS Splunk University
• Three days: Sept 24-26, 2016
• Get Splunk Certified for FREE!
• Get CPE credits for CISSP, CAP, SSCP
• Save thousands on Splunk education!
Splunk handles the full continuum: past, present & future.
Q: Why do we need ML?
A: ML provides the “models” that we can use to make decisions about missing data. We need ML to predict & forecast possible future events based on historical & real-time data.
Q: What is a statistical model?
A: A model is a little copy of the world you can hold in your hands.
Formal: A model is a parametrized relationship between variables.
FITTING a model sets the parameters using feature variables & observed values
APPLYING a model fills in predicted values using feature variables
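For instance (an illustration, not from the deck): in simple linear regression the model is y = β0 + β1·x. FITTING chooses the parameters β0 and β1 from observed (x, y) pairs; APPLYING plugs a new x into the fitted equation to produce a predicted y.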
Image source: http://phdp.github.io/posts/2013-07-05-dtl.html
Supervised learning is where you have existing LABELS in the data to help you out.
Example: If you’re training a model for CUSTOMER CHURN, historically you know which customers stayed and which left. You can build a model to correlate historical churn with other features in the data. Then you can PREDICT churn for each customer based on everything they’re doing in real-time and have done in the past.
Unsupervised learning is where you have NO LABELS to help you out. You have to figure out the patterns yourself.
Example: If you’re trying to do BEHAVIORAL ANALYTICS, you might just have a big confusing pile of IT & Security data to wade through. Unsupervised learning is the art & science of finding PATTERNS, BASELINES and ANOMALIES in the data. Once you understand all this (that’s hard!) you can try to predict possible INCIDENTS and THREATS.
Good ML involves FEEDBACK loops. Best bet is to incorporate INCIDENT RESPONSE data and learn from what analysts have done in the past.
[NEXT SLIDE: Reinforcement Learning]
Reinforcement Learning is basically Supervised Learning where LABELS = REWARDS, and there is a strong focus on TIME and FEEDBACK LOOPS. This is how you OPERATIONALIZE machine learning: by looping back results of analysis and workflow and LEARN from interactions with the world.
Rewards can be POSITIVE or NEGATIVE.
Image: The Leitner system is reinforcement learning for flashcards. Correct answers “advance” and accumulate more points. Incorrect answers go back to the beginning.
https://en.wikipedia.org/wiki/Leitner_system
Reinforcement learning is rooted in behavioral psychology. Humans & animals are hard-wired for rewards
https://en.wikipedia.org/wiki/Reinforcement_learning
Reinforcement learning lets us OPERATIONALIZE machine learning. When the machine recommends something to an analyst, the model can LEARN from the outcome of their work. Create a culture of REWARDS for your analytics team, not punishments.
The machines can/should learn from ALL the available data. You might have to build complex ML workflows. You'll want a good Splunk admin to help architect them.
You want a VIRTUOUS CYCLE of human-machine interaction.
Q: How is this slide similar to the previous one? (go back and forth)
A: The ML Process is the same, it’s just that the data & the operations teams are different. Also different will be the actual analysis in the middle, but the *process* of doing that analysis is the same.
The ML process is itself a generalization of the different use cases. ML spans domains!
The arrow means OPERATIONALIZE. Feed incident data & other high-level analysis back into the ML Process. Keep exploring that data & fitting better models to align with reality. Loop Step #5 (Act) back to Step #1 (Data).
Before you do machine learning, you need DATA and DECISION-MAKERS. Walk before you can run! Start with useful data sources that can help people solve problems, and build basic dashboards correlating different things in the data.
This is called EXPLORATORY DATA ANALYSIS. Once you do that, THEN try to fit models based on what seems to correlate. Interviewing & iterating with decision-makers is key.
DATA: ML isn’t magic. You need good data to learn from.
DECISION-MAKERS: Once you find patterns, anomalies, etc., who are you going to deliver them to? How do they want information presented? Emails? Dashboards? Incident tickets?
Walk before running! Precursor to building models & doing ML.
Image: OpenStreetMap logo, from Wikipedia. Creative Commons
Q: Why standalone SH?
A: Don’t want ML exploration & production to bring down other Splunk workloads
Can use standalone 6.3 SH with older version SH cluster & indexers.
PSC add-on is FREE. Go to the ML App link above and click Documentation. Links for all distros.
Remember: Machine Learning is a PROCESS. Takes a lot of work & elbow grease to get from Exploratory Data Analysis to ML Models in Production.
Q: Why hard?
A1: Statistics is hard. Subtle questions re: model error & statistical assumptions. Remember: “All models are wrong, some are useful”
A2: Validation is also difficult because not everyone has the same requirements. For example, for some users false positives may be much more expensive than false negatives; for others, the opposite may be true. For some users, being 2X wrong is twice as bad as being X wrong; for others, there may be a non-linear relationship between error and badness.
Re: ML App v0.9. To be updated after new release. Time estimate: “soon, stay tuned!”
If you want to use ML in production, let us know! We have customers using ML in production TODAY. e.g., New York Air Brake
Time for ML demo!
Get the ML App: http://tiny.cc/splunkmlapp
Want more? Take Splunk’s Analytics & Data Science course!
Course prework: http://bit.ly/splunkanalytics
Re: ML App v0.9. To be updated after new release. Stay tuned! Lots to come w/ Splunk ML.
Image modified from cover of book Protecting Study Volunteers in Research
Publisher: CenterWatch LLC; 4th Edition (June 15, 2012)
NEXT: either leave slide & discuss OR show ML demo
A direct customer-Splunk engagement focused on real-world use of the Splunk Enterprise Machine Learning Toolkit and Showcase app and related SPL commands.
Objectives
• Help the customer to be successful in the impactful use of ML
• Help Splunk to understand customer use cases and product requirements
Details
• Splunk Account SE plus PM/Engineering work directly with the customer to guide usage, provide support, note analytics and product requirements, and refine the product where feasible
• Customer participates in the above, developing 1 or more models and putting them in production
• Customer agrees to be referenced publicly, sharing reasonable detail and business impact
• Customer agrees to participate in a set of activities that may include: case study, press quote, use of logo, PR/AR reference call, video profile
We’re headed to the East Coast!
2 inspired Keynotes – General Session and Security Keynote + Super Sessions with Splunk Leadership in Cloud, IT Ops, Security and Business Analytics!
165+ Breakout sessions addressing all areas and levels of Operational Intelligence – IT, Business Analytics, Mobile, Cloud, IoT, Security…and MORE!
30+ hours of invaluable networking time with industry thought leaders, technologists, and other Splunk Ninjas and Champions waiting to share their business wins with you!
Join the 50%+ of Fortune 100 companies who attended .conf2015 to get hands on with Splunk. You’ll be surrounded by thousands of other like-minded individuals who are ready to share exciting and cutting edge use cases and best practices. You can also deep dive on all things Splunk products together with your favorite Splunkers.
Head back to your company with both practical and inspired new uses for Splunk, ready to unlock the unimaginable power of your data! Arrive in Orlando a Splunk user, leave Orlando a Splunk Ninja!
REGISTRATION OPENS IN MARCH 2016 – STAY TUNED FOR NEWS ON OUR BEST REGISTRATION RATES – COMING SOON!