Check out this LaunchPoint presentation featuring data from Lattice, MuleSoft, and FireEye to discover how to rev up your marketing engine with the power of predictive lead scoring.
1. Rev Up Your Lead Engine With Predictive Scoring
Dan Ahmadi and Joanna Kwong
2. Housekeeping
• This webinar is being recorded! Slides and recording will be sent to you after the webinar
• Enter your questions in the chat – we will get to them after the webinar
• Posting to social? Make sure to use #mktgnation or tweet @marketo or @Lattice_Engines
8. Lattice Engines Predictive Marketing and Sales
Cloud-based applications to improve conversion at every stage of the revenue funnel:
1. PROSPECT DISCOVERY – Find net-new prospects
2. ACCOUNT PRIORITIZATION – Find companies most likely to buy
3. LEAD PRIORITIZATION – Find your most sales-ready leads
4. CROSS-SELL/UP-SELL – Find customers most likely to buy more
5. RETENTION – Find customers most likely to churn
9. Rev Up Your Lead Engine
• Introduction
• Justification & Vendor Selection
• Implementation
• Rolling it Out
• Results
• Q&A
Joanna Kwong – Marketing Automation Manager, FireEye (joannakwong)
Dan Ahmadi – Sr. Manager, Marketing Operations and Technology, MuleSoft (dan_ahmadi)
10. About Our Companies
FireEye
• Security company
• Founded in 2004
• Went public in Sept. 2013
• Acquired Mandiant in Jan. 2014
• 2,500+ employees
• Marketing organization of 150+
• Marketing Ops team of 4
MuleSoft
• Connecting the world’s apps, data, and devices
• Founded in 2006, HQ in San Francisco
• 500+ employees
• Sales staff in 8 countries
• Marketing organization of 30+
• Marketing Ops team of 1¾
11. Our Traditional Lead Scoring Model
Lead Score = Demographic (profile fit) + Behavior (buying behavior) + Data Quality
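The traditional model above can be sketched as a simple additive score. This is an illustrative sketch only: the slide does not give component weights, so equal weighting and the function name are our assumptions.

```python
def traditional_lead_score(demographic: float, behavior: float,
                           data_quality: float) -> float:
    """Additive model from the slide:
    Lead Score = Demographic (profile fit) + Behavior (buying behavior) + Data Quality.

    Equal weighting is an assumption; the slide gives no weights.
    """
    return demographic + behavior + data_quality
```

In practice each component would itself be a weighted sum of attributes (title, industry, web visits, form fills, etc.), which is exactly the hand-tuned part that predictive scoring replaces.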
12. Why Go Predictive?
OBJECTIVES
• Efficiency – How do we prioritize our database? How do we prioritize our inbound leads each month?
• Effectiveness – How do we improve our conversion rates? How do we improve the volume of leads sent to sales?
• Investment – Where do we invest our marketing spend?
CHALLENGES
• (Certain kinds of) activity may not be as important as we thought
• Resource constraints – too much volume, too little volume
• Already had a scoring system in place – where do we go next?
16. Vendor Selection
Our Predictions → Reality
• “This is too good to be true.” → Yes, predictive works!
• “Our current scoring system works!” → Not as well as we thought.
• “I already know who my target is, and their title begins with VP or Chief.” → Our assumptions are wrong.
• “We’ll just choose the most accurate vendor.” → Accuracy is not the only factor!
17. Vendor Selection – Takeaways!
Consider these factors when selecting a vendor: security, custom integrations, reference calls, accuracy, roadmap, number of models included, time to score, market share, score output, time to implement, support/maintenance, and data sources.
Tweetable Moment: “Don’t base your #PredictiveScoring vendor selection on things that won’t matter in the long run.”
22. Implementation
Our Predictions → Reality
• No project involvement battles → Everyone wants to be involved!
• No need for existing lead score → We need to keep and use it!
• No need for all of our data → The more data, the better!
• Underestimated data preparation → Garbage in, garbage out!
• No customization needed → Customization was key – and needs to be accounted for.
Tweetable Moment: “#PredictiveScoring won’t fix your existing data or process issues. It builds upon them.”
24. Ways to Roll Out Predictive Scoring
#1 “GO FULL THROTTLE” – Turn off any existing scoring system, and turn on predictive scoring.
#2 “THE BLIND TEST” – If you have an existing scoring system in place, choose a few sales reps to work off of the predictive MQLs while others continue to work on traditional MQLs.
#3 “THE TWO-WAY TEST” – MQL both your traditional MQLs and your predictive MQLs, and monitor the difference.
#4 “THE SCORING COMBO” – Combine the scores to reflect the intersection of predictive fit and engagement.
25. Determine What Triggers a Score
Out-of-the-box triggers:
• New Lead
• Click Link
• Visit Web Page
• Interesting Moment
• Open Email
• Email Bounced Soft
• Fill Out Form
• Unsubscribe Email
• Click Email Link
27. Implementation – Takeaways!
• Predictive scoring can’t be done in a vacuum
• “Friends don’t let friends make a biased predictive model”
• Use the right data to get the right result
• Don’t be afraid to layer scores
• The ability to customize is key – from what goes into your model, to what triggers a score, to how your scores are processed
Tweetable Moment: “Don’t let bias into your scoring model. Keep your predictive model predictive!”
30. Rolling it Out
Our Predictions → Reality
• Sales will love this! → Sales was not happy at first.
• We can roll out with minimal buy-in → You can’t have enough buy-in!
• We can share raw scores with our reps → Grades and rankings are much easier to understand.
31. Predictive Score Grades – MuleSoft
Grades: A = 65+, B = 50–65, C = 0–50.
We pass grades instead of actual scores. We made some modifications for special cases.
BENEFITS
• No discrimination amongst specific scores
• All leads are worth a look
• We adjust the C threshold for capacity
New-MQL SLA (attempts) by grade:
• A – 6 attempts, #1 within 24 hours, 100% follow-up
• B – 1 attempt, #1 within 24 hours, 100% follow-up
• C – 0 attempts, weekly triaging encouraged
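The grade thresholds above are a simple banding of the raw predictive score. A minimal sketch of that mapping, using the thresholds from the slide (the function name and boundary handling are our assumptions):

```python
def mulesoft_grade(score: float) -> str:
    """Map a raw predictive score to the letter grade passed to sales.

    Thresholds from the slide: A = 65+, B = 50-65, C = 0-50.
    Boundary scores (exactly 65 or 50) are assigned to the higher
    grade here; the slide doesn't specify.
    """
    if score >= 65:
        return "A"
    if score >= 50:
        return "B"
    return "C"
```

Passing the grade rather than the raw number is what delivers the “no discrimination amongst specific scores” benefit: a rep sees A/B/C, not 67 vs. 72.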
32. Consider a combined grade…
A5 A4 A3 A2 A1 A0
B5 B4 B3 B2 B1 B0
C5 C4 C3 C2 C1 C0
A5 = Best. Leads. Ever. A0 = ice cold + great fit. C5 = would traditionally get a high score. C0 = not so great.
How this works: a Salesforce formula combines the grade with a count of campaign responses.
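The combined grade described above is a Salesforce formula field in the actual deployment; as an illustration of the same logic (function name and the cap at 5 responses, implied by the grid, are our assumptions):

```python
def combined_grade(grade: str, campaign_responses: int) -> str:
    """Combine predictive fit (grade A-C) with engagement (0-5).

    Mirrors the slide's grid: A5 = best fit + most engaged,
    C0 = poor fit + no engagement. On the slide this is a
    Salesforce formula; this Python version is illustrative only.
    """
    engagement = min(campaign_responses, 5)  # grid tops out at 5
    return f"{grade}{engagement}"
```

This is what “layering scores” looks like in practice: fit and engagement stay independent dimensions, so an A0 (great fit, ice cold) is visibly different from a C5 (poor fit, lots of activity).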
33. Predictive Score Rankings – FireEye
Rankings: Excellent Likelihood = 35+, High Likelihood = 20–35, Average Likelihood = 11–19, Low Likelihood = 0–10.
We pass rankings instead of actual scores.
BENEFITS
• Ability to control volume without disruption
• Ability to modify definitions without disruption
• Easier to digest and interpret
New-MQL SLA by ranking:
• Excellent Likelihood – 3 days
• High Likelihood – 3 days
• Average Likelihood – 0 (weekly triaging)
• Low Likelihood – 0 (weekly triaging)
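FireEye’s ranking bands work the same way as MuleSoft’s grades, with the SLA attached directly to the band. A sketch using the thresholds and SLAs from the slide (function name and boundary handling are our assumptions):

```python
from typing import Tuple

def fireeye_ranking(score: float) -> Tuple[str, str]:
    """Map a predictive score to FireEye's ranking and its new-MQL SLA.

    Bands from the slide: 35+ Excellent, 20-35 High,
    11-19 Average, 0-10 Low. Boundary handling is assumed.
    """
    if score >= 35:
        return ("Excellent Likelihood", "3 days")
    if score >= 20:
        return ("High Likelihood", "3 days")
    if score >= 11:
        return ("Average Likelihood", "weekly triaging")
    return ("Low Likelihood", "weekly triaging")
```

Because only the bands are exposed, the underlying thresholds can be retuned to control MQL volume without sales ever noticing a disruption.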
34. Rolling it Out – Takeaways!
• Don’t underestimate how much buy-in you need
• Keep reinforcing its effectiveness periodically
• Set the right expectations
• Get ready to fight preconceived notions
Tweetable Moment: “Predictive lead scoring does not magically make your audience full of buyers.”
39. MuleSoft Results
Metric | 6M before Predictive | 6M since Predictive | Impact
Raw lead to MQL | 70.5% | 60.9% | -13.6%
MQL to Opportunity | 21.2% | 29.5% | +38.5%
ADR score utilization | 64% | 95% | +48.4%
Monthly avg opportunities | - | - | +24%
Grade mix: A 22%, B 23%, C 55%.
Improved productivity; stricter SLAs.
41. FireEye Results
• Results aren’t immediate – and aren’t always what you expect.
• We expected an increase in conversions, but saw an increase in velocity first.
Observation #1: Increased Velocity – leads are 35% more likely to progress through the sales cycle if from a high predictive MQL.
Observation #2: Increased Conversions – MQL > SAL conversion rate: 2x increase*.
*Our challenge – moving to an account-based model with predictive scoring limits our visibility.
This wasn’t just luck. They looked at all the past data of people who’ve bought this backpack and ones very close to it, and this is what they also bought.
Now for a B2B example of the Amazon concept…
At the tip of the iceberg is the demographic (“people”) data that you know. It’s shallow.
But what a predictive vendor can add is all the other firmographic information you wouldn’t know on your own.
1000s of data sources
WHY DO PEOPLE BUY? They’re trying to solve one of these problems.
Lattice provides predictive marketing and sales applications that improve conversion at every stage of the revenue funnel. Our applications combine billions of buying signals and apply advanced machine learning to help you find the companies that are most likely to buy, what they’re likely to buy, and when. Help me understand a little bit about your company so that I can tailor our demo for you.
• Want to improve the performance of your outbound program? SDRs?
• Getting started with a named-account/target-account/ABM program?
• Sales reps overwhelmed by lots of leads?
• Looking to sell more to existing customers?
Strong storyline: More of a cohesive story. Add the how and the why.
Dan’s notes:
Full visibility – We have an open data/information policy and sync all leads to Salesforce. That doesn’t mean we don’t have to separate the qualified ones from the not-so-qualified. As our inbound leads doubled and tripled year over year, we had to find a way to prioritize them for our team. We wanted to be able to say: focus on this group, don’t focus on these at all. We weren’t confident our home-grown scoring solution could do that.
We have a lead score? When we surveyed the team, we found that over time the reps had lost faith in the scoring model and excluded it from their views entirely. New reps didn’t even know we had a score.
Activity – As our company made the transition to an account-based sales model and started to go outbound, web visits were less and less important. A CIO/exec isn’t going to have time to engage with us over email/web, and when we weighted their titles with our home-grown scoring, we elevated all the SMB CEOs we didn’t care about. We needed something smarter.
Joanna’s notes:
We already had a working lead scoring system in place, but started to ask ourselves how we could get more intelligent with our system.
We wanted to optimize our marketing and sales teams’ efficiencies.
We grew from 100 to 2,500+ employees in just 3 years – what worked before no longer worked, and we needed to re-evaluate.
Dan’s notes:
“Hey, I heard you’re working on a predictive model. I’d like to be part of that discussion.”
Being involved in our model – EVERYONE wanted in. There were always ulterior motives for the involvement. We found that stakeholders wanted to make sure THEIR lead sources/programs got pushed to the highest score. They felt like they had to campaign to us and prove the quality of their leads.
What do we do with our existing scoring? We decided to keep it. Although the quality of a lead can be determined by a predictive model, knowing the activity level still helps. Think of it as a secondary layer on top of the score. We use it to resurface active, high-quality leads now.
Using all our data for the best result – We thought if we just let it eat everything up, we’d get the best result. Well, there’s this thing called over-optimization. For example, if a data field is missing for 75% of your database and you recently started collecting it, the engine will say not having it is more likely to create an opp than having it. That’s bad news. Don’t include data that is incomplete. Also, don’t use data that you don’t have once the lead comes in!
Joanna’s notes:
Should we include a slide on how we standardized or prepared our data?
Preparing the data foundation was actually a lot of work, but it allowed us to really utilize predictive scoring. This was not an easy task, but it was an essential one. We needed to make sure that our data was available for incorporation into our model – that meant standardized values, etc.
Lead scoring does not fix your existing data or process issues. If you have junk in your system, it will still surface. You have to have ways to ensure that it won’t.
Predictive scoring also isn’t entirely out of the box. There are still campaigns to be built – e.g., we heavily utilize Interesting Moments.
Joanna take some – I’ll take others – or you can do the whole thing
-- Joanna --
Dan’s notes:
Letting bias into your model defeats the entire purpose of predictive scoring. Keep it predictive; let it do its magic.
Use data you’re confident in – worst case, use just the email address and let the vendor add their own data to it.
If you didn’t care about a geo and now you do, make sure to exclude that data from your model, or else all those leads will be scored unfairly.