1. Google Cloud developer tools + an
Easier path to machine learning
Wesley Chun
Developer Advocate, Google
Adjunct CS Faculty, Foothill College
2.
3. G Suite Dev Show
goo.gl/JpBQ40
About the speaker (not a data scientist!)
Developer Advocate, Google Cloud
● Mission: enable current and future
developers everywhere to be
successful using Google Cloud and
other Google developer tools & APIs
● Videos: host of the G Suite Dev Show
on YouTube
● Blogs: developers.googleblog.com &
gsuite-developers.googleblog.com
● Twitter: @wescpy, @GoogleDevs,
@GSuiteDevs
Previous experience / background
● Software engineer & architect for 20+ years
● One of the original Yahoo!Mail engineers
● Author of bestselling "Core Python" books
(corepython.com)
● Technical trainer, teacher, instructor since
1983 (Computer Science, C, Linux, Python)
● Fellow of the Python Software Foundation
● AB (Math/CS) & CMP (Music/Piano), UC
Berkeley and MSCS, UC Santa Barbara
● Adjunct Computer Science Faculty, Foothill
College (Silicon Valley)
Why and Agenda
● Big data is everywhere now
● Need the power of AI to help analyze
● Requires a certain level of math/statistics background
● AI/ML has a somewhat steep learning curve
● APIs powered by ML help ease this burden
● If you can call APIs, you can use ML!
1
Intro to machine
learning
2
Intro to Google
Cloud
3
Google APIs
4
Cloud ML APIs
5
Other APIs to
consider
6
All of Cloud
(inspiration)
7
Summary &
wrap-up
4. Caveat: I am NOT a data scientist.
(I studied networking/distributed systems in
school.) Like many of you, I've seen the
rise of big data and gotten caught up in
the excitement of ML, so this is also part
of my journey. However...
I can call APIs, therefore I am.
What is machine learning?
AI, ML, and making computers smarter; to help us
understand more and get more insights than before
1
6. AI & Machine Learning
Puppy or muffin?
Source:
twistedsifter.com/2016/03/puppy-or-bagel-meme-gallery
Machine learning is learning
from rules plus experience.
8. AI
Make code solve
problems commonly
associated with
human intelligence
ML
Make code learn
from experience
instead of explicit
programming
DL
ML using deep neural
networks… make
code learn to be
even better/smarter
ML @ Google
How has ML improved our products?
10. Google Photos
Did you ever stop
to notice this app
has a search bar?!?
[Chart: # of directories containing neural net model description files, 2012-2016]
Use of Deep Learning at Google accelerated rapidly
11. "ML in the wild"
How has ML enabled what was difficult/impossible to do before?
Global view
Problem
● 1B people depend on seafood
● 85% of fisheries are at/over capacity or recovering
● 20% of catch is illegal, undocumented, or unregulated
● Monitoring by human analysts doesn't scale
One solution
● globalfishingwatch.org/map
● Machine-learning classifiers:
○ Ship type: cargo, tug, sail, fishing
○ Ship size
○ Gear: longline, purse seine, trawl
○ Movement tracking: when and
where vessels are fishing
12. A major food manufacturer in Japan:
hard to find the bad potato cubes
Finding the bad
potato cubes
13. A taste of ML
Fun Google-created tools to explore AI/ML
quickdraw.withgoogle.com
Google Cloud Vision demo "experiment"
experiments.withgoogle.com/quick-draw
14. vision-explorer.reactive.ai
NEXT '16: Google Cloud Vision demo
cloud.google.com/blog/products/gcp/explore-the-galaxy-of-images-with-cloud-
vision-api and open-source repo at github.com/cogentlabs/cloud-vision-explorer
How to get started
Enough talk, let's think about first steps
15. Lots of data
Complex mathematics in
multidimensional spaces
Magical results
Popular imagination of what Machine Learning is
In reality, what ML is:
● Collect data
● Organize data
● Create model
● Use machines to flesh out the model from data
● Deploy fleshed-out model
16. Large Datasets Good Models Lots Of Computation
Keys to Successful Machine Learning
Activity Detection
if (speed < 4) {
    status = WALKING;
}

if (speed < 4) {
    status = WALKING;
} else {
    status = RUNNING;
}

if (speed < 4) {
    status = WALKING;
} else if (speed < 12) {
    status = RUNNING;
} else {
    status = BIKING;
}
// Oh crap
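The hard-coded thresholds above are exactly what ML replaces: instead of writing the rule, you learn it from labeled examples. A minimal illustrative sketch in plain Python (not an API call), which learns the walking/running speed cutoff as a one-feature "decision stump":

```python
# Learn a speed threshold from labeled examples instead of hard-coding it.
# Purely illustrative: a one-feature, one-split "decision stump".

def learn_threshold(samples):
    """samples: list of (speed, label) pairs, labels 'WALKING'/'RUNNING'.
    Returns the candidate threshold that misclassifies the fewest samples."""
    candidates = sorted(s for s, _ in samples)
    best_t, best_errors = None, len(samples) + 1
    for t in candidates:
        errors = sum(1 for speed, label in samples
                     if (speed < t) != (label == 'WALKING'))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

data = [(1, 'WALKING'), (2, 'WALKING'), (3, 'WALKING'),
        (5, 'RUNNING'), (8, 'RUNNING'), (11, 'RUNNING')]
threshold = learn_threshold(data)  # recovers a cutoff near the "4" above
```

Feed it more (and messier) data and the learned cutoff adapts, which is the whole point: the rule comes from experience, not from the programmer.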
19. Fashion MNIST
● 70k grayscale images
○ 60k training set
○ 10k testing set
● 10 categories
● Images: 28x28 pixels
● Go train a neural net!
tensorflow.org/tutorials/
keras/classification
import tensorflow as tf
from tensorflow import keras
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
09 = ankle boot (踝靴; アンクルブーツ; Bróg rúitín)
20. Your steps
1. Import MNIST dataset
2. Explore/preprocess data
3. Build model
a. Setup layers
b. Compile model
4. Train model
5. Evaluate accuracy
6. Make predictions
7. (Have fun!)
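The tutorial linked above typically uses a tiny model for step 3: Flatten(28×28) → Dense(128, relu) → Dense(10). As a quick sanity check on those layer sizes without needing TensorFlow installed, the parameter counts can be computed by hand (a Dense layer has inputs × units weights plus units biases):

```python
# Sanity-check the sizes of the tutorial's usual model:
# Flatten(28x28) -> Dense(128, relu) -> Dense(10).

def dense_params(n_in, n_out):
    """Weights (n_in * n_out) plus biases (n_out) of a Dense layer."""
    return n_in * n_out + n_out

flat = 28 * 28                    # 784 pixels per image after Flatten
hidden = dense_params(flat, 128)  # first Dense layer
output = dense_params(128, 10)    # one logit per clothing category
total = hidden + output           # what model.summary() would report
```

Small enough to train in minutes on a laptop, which is why it makes a good first neural net.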
2 Introduction to
Google Cloud
GCP and G Suite tools & APIs
22. How can Google Cloud help (industry)?
● What can we provide developers?
○ Virtual machines, GPUs, and variety of data storage
○ Ability to craft & design your own network/subnet
○ Pre-trained machine learning models
○ Container-hosting, ML build & deploy infrastructure
○ Serverless compute & data services
○ Additional or emergency compute & storage capacity
○ Productivity tools you already use (G Suite)
ML "building block" APIs
● Gain insights from data using GCP's pre-trained
machine learning models
● Leverage the same technology as Google
Translate, Photos, and Assistant
● Requires ZERO prior knowledge of ML
● If you can call an API, you can use AI/ML!
● cloud.google.com/products/ai/building-blocks
Vision ● Video Intelligence ● Speech (S2T & T2S) ● Natural Language ● Translation
23. Full Spectrum of AI & ML Offerings
● ML APIs (app developer): use pre-built/pre-trained models
● AutoML (app developer, data scientist): use/customize pre-built models
● ML Engine (data scientist, developer): build custom models, use OSS SDK on fully-managed infrastructure
● Self-managed infrastructure (data scientist, researcher w/infrastructure access & DevOps/SysAdmin skills): build custom models, use/extend OSS SDK, self-manage training infrastructure
3 Google APIs
What are they? How do you use them?
25. ● General steps
1. console.cloud.google.com
2. Log in to Google/Gmail account (G Suite
accounts may require administrator approval)
3. Create project (per application)
4. Enable APIs you want to use
5. Enable billing
6. Download client library(ies)
7. Create & download credentials
8. Write your code
9. Run your code (may need to authorize)
Google APIs: how to use
Costs and pricing
● GCP: pay-per-use
● (G Suite: monthly subscription)
● GCP Free Trial ($300/yr, CC req'd)
● GCP "Always Free" tier
○ Most products have free tier
○ Daily or monthly quota
○ Must exceed to incur billing
● More on both programs at
cloud.google.com/free
26. Cloud/GCP console
console.cloud.google.com
● Hub of all developer activity
● Applications == projects
○ New project for new apps
○ Projects have a billing acct
● Manage billing accounts
○ Financial instrument required
○ Personal or corporate credit cards,
Free Trial, and education grants
● Access GCP product settings
● Manage users & security
● Manage APIs in devconsole
● View application statistics
● En-/disable Google APIs
● Obtain application credentials
Using Google APIs
goo.gl/RbyTFD
API manager aka Developers Console (devconsole)
console.developers.google.com
27. Google APIs client libraries for many
languages; demos in
developers.google.com/api-client-library &
cloud.google.com/apis/docs/cloud-client-libraries
4 Cloud ML APIs
Easier path to ML by simply calling APIs!
28. Machine Learning: Cloud Vision
Google Cloud Vision API
from google.cloud import vision
image_uri = 'gs://cloud-samples-data/vision/using_curl/shanghai.jpeg'
client = vision.ImageAnnotatorClient()
image = vision.types.Image()
image.source.image_uri = image_uri
response = client.label_detection(image=image)
print('Labels (and confidence score):')
print('=' * 30)
for label in response.label_annotations:
    print(f'{label.description} ({label.score*100.:.2f}%)')
Vision: label annotation/object detection
29. $ python3 label-detect.py
Labels (and confidence score):
==============================
People (95.05%)
Street (89.12%)
Mode of transport (89.09%)
Transport (85.13%)
Vehicle (84.69%)
Snapshot (84.11%)
Urban area (80.29%)
Infrastructure (73.14%)
Road (72.74%)
Pedestrian (68.90%)
Vision: label annotation/object detection
g.co/codelabs/vision-python
from google.cloud import vision
image_uri = 'gs://cloud-vision-codelab/otter_crossing.jpg'
client = vision.ImageAnnotatorClient()
image = vision.types.Image()
image.source.image_uri = image_uri
response = client.text_detection(image=image)
for text in response.text_annotations:
    print('=' * 30)
    print(f'"{text.description}"')
    vertices = [f'({v.x},{v.y})' for v in text.bounding_poly.vertices]
    print(f'bounds: {",".join(vertices)}')
Vision: OCR, text detection/extraction
30. $ python3 text-detect.py
==============================
"CAUTION
Otters crossing
for next 6 miles
"
bounds: (61,243),(251,243),(251,340),(61,340)
==============================
"CAUTION"
bounds: (75,245),(235,243),(235,269),(75,271)
==============================
"Otters"
bounds: (65,296),(140,297),(140,315),(65,314)
==============================
"crossing"
bounds: (151,294),(247,295),(247,317),(151,316)
:
Vision: OCR, text detection/extraction
g.co/codelabs/vision-python
from google.cloud import vision
image_uri = 'gs://cloud-vision-codelab/eiffel_tower.jpg'
client = vision.ImageAnnotatorClient()
image = vision.types.Image()
image.source.image_uri = image_uri
response = client.landmark_detection(image=image)
for landmark in response.landmark_annotations:
    print('=' * 30)
    print(landmark)
Vision: landmark detection, entity extraction
32. Machine Learning: Cloud Natural Language
Google Cloud Natural Language API
Simple sentiment & classification analysis
TEXT = '''Google, headquartered in Mountain View, unveiled the new
Android phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.'''
print('TEXT:', TEXT)
data = {'type': 'PLAIN_TEXT', 'content': TEXT}
NL = discovery.build('language', 'v1', developerKey=API_KEY)
# sentiment analysis
sent = NL.documents().analyzeSentiment(
    body={'document': data}).execute().get('documentSentiment')
print('\nSENTIMENT: score (%.2f), magnitude (%.2f)' % (
    sent['score'], sent['magnitude']))

# content classification
print('\nCATEGORIES:')
cats = NL.documents().classifyText(body={'document': data}).execute().get('categories')
for cat in cats:
    print('* %s (%.2f)' % (cat['name'][1:], cat['confidence']))
33. Simple sentiment & classification analysis
$ python nl_sent_simple.py
TEXT: Google, headquartered in Mountain View, unveiled the new Android
phone at the Consumer Electronics Show. Sundar Pichai said in
his keynote that users love their new Android phones.
SENTIMENT: score (0.30), magnitude (0.60)
CATEGORIES:
* Internet & Telecom (0.76)
* Computers & Electronics (0.64)
* News (0.56)
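For reference, the Natural Language API returns a sentiment score in [-1.0, 1.0] (polarity) and a non-negative magnitude (overall emotional strength). A tiny helper that turns the demo's numbers into words; the 0.25 cutoffs are illustrative, not an official rule:

```python
# Interpret Natural Language API sentiment: score in [-1.0, 1.0] is
# polarity, magnitude (>= 0) is overall emotional strength.
# The 0.25 cutoffs below are illustrative, not an official threshold.

def describe_sentiment(score, magnitude):
    if score > 0.25:
        polarity = 'positive'
    elif score < -0.25:
        polarity = 'negative'
    else:
        polarity = 'neutral/mixed'
    return f'{polarity} (score={score:.2f}, magnitude={magnitude:.2f})'

label = describe_sentiment(0.30, 0.60)  # the demo's values above
```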
Machine Learning: Cloud Speech
Google Cloud Speech APIs
34. Text-to-Speech: synthesizing audio from text
# request body (with text body using 16-bit linear PCM audio encoding)
body = {
    'input': {'text': text},
    'voice': {
        'languageCode': 'en-US',
        'ssmlGender': 'FEMALE',
    },
    'audioConfig': {'audioEncoding': 'LINEAR16'},
}

# call Text-to-Speech API to synthesize text (write to text.wav file)
T2S = discovery.build('texttospeech', 'v1', developerKey=API_KEY)
audio = T2S.text().synthesize(body=body).execute().get('audioContent')
with open('text.wav', 'wb') as f:
    f.write(base64.b64decode(audio))
Speech-to-Text: transcribing audio to text
# request body (16-bit linear PCM audio content, i.e., from text.wav)
body = {
    'audio': {'content': audio},
    'config': {
        'languageCode': 'en-US',
        'encoding': 'LINEAR16',
    },
}

# call Speech-to-Text API to recognize text
S2T = discovery.build('speech', 'v1', developerKey=API_KEY)
rsp = S2T.speech().recognize(
    body=body).execute().get('results')[0]['alternatives'][0]
print('** %.2f%% confident of this transcript:\n%r' % (
    rsp['confidence']*100., rsp['transcript']))
35. Speech-to-Text: transcribing audio to text
$ python s2t_demo.py
** 92.03% confident of this transcript:
'Google headquarters in Mountain View unveiled the new
Android phone at the Consumer Electronics Show Sundar
pichai said in his keynote that users love their new
Android phones'
Machine Learning: Cloud Video Intelligence
Google Cloud Video Intelligence API
36. Video intelligence: make videos searchable
# request body (single payload, base64 binary video)
body = {
    'inputContent': video,
    'features': ['LABEL_DETECTION', 'SPEECH_TRANSCRIPTION'],
    'videoContext': {'speechTranscriptionConfig': {'languageCode': 'en-US'}},
}

# perform video shot analysis followed by speech analysis
VINTEL = discovery.build('videointelligence', 'v1', developerKey=API_KEY)
resource = VINTEL.videos().annotate(body=body).execute().get('name')
while True:
    results = VINTEL.operations().get(name=resource).execute()
    if results.get('done'):
        break
    time.sleep(random.randrange(8))  # expo-backoff probably better
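As the comment in the polling loop hints, capped exponential backoff (with jitter) is friendlier to the API than a flat random sleep. A minimal sketch of what could replace the time.sleep(random.randrange(8)) call:

```python
import random

def backoff_delays(base=1.0, cap=32.0, factor=2.0):
    """Yield capped exponential backoff delays with full jitter."""
    delay = base
    while True:
        yield random.uniform(0, delay)   # jitter: anywhere up to current cap
        delay = min(cap, delay * factor) # double the cap, up to the maximum

# In the polling loop: delays = backoff_delays(), then
# time.sleep(next(delays)) instead of time.sleep(random.randrange(8))
delays = backoff_delays()
sample = [next(delays) for _ in range(8)]
```

Early polls stay quick (bounded by 1s, 2s, 4s, ...), while a long-running operation is eventually only polled about every half minute.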
Video intelligence: make videos searchable
# display shot labels followed by speech transcription
for labels in results['response']['annotationResults']:
    if 'shotLabelAnnotations' in labels:
        print('\n** Video shot analysis labeling')
        for shot in labels['shotLabelAnnotations']:
            seg = shot['segments'][0]
            print(' - %s (%.2f%%)' % (
                shot['entity']['description'], seg['confidence']*100.))
    if 'speechTranscriptions' in labels:
        print('** Speech transcription')
        speech = labels['speechTranscriptions'][0]['alternatives'][0]
        print(' - %r (%.2f%%)' % (
            speech['transcript'], speech['confidence']*100.))
37. Video intelligence: make videos searchable
$ python3 vid_demo.py you-need-a-hug.mp4
** Video shot analysis labeling
- vacation (30.62%)
- fun (61.53%)
- interaction (38.93%)
- summer (57.10%)
** Speech transcription
- 'you need a hug come here' (79.27%)
Machine Learning: Cloud Translation
Google Translate
38. Translating text "Hello World"
const {Translate} = require('@google-cloud/translate');
const translate = new Translate({projectConfig});
const text = 'Hello World!'; // Text to translate
const target = 'ru'; // Target language
// Translate text to Russian
const translation = await translate.translate(text,
    {from: 'en', to: target});
// Translation: Привет, мир!
console.log('Translation:', translation[0]);
Translation methods
const translate = new Translate({projectConfig});
// Translate (auto-detected language) text to target language
const translation = await translate.translate(text, target);
// List supported languages with abbreviations
const languages = await translate.getLanguages();
// Detect given language
const language = await translate.detect('сайн байна уу');
39. Machine Learning: AutoML
● General steps
a. Prep your training data
b. Create dataset
c. Import items into dataset
d. Create/train model
e. Evaluate/validate model
f. Make predictions
Cloud AutoML: how to use
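For step c, AutoML Vision (as one example) imports items from a CSV in Cloud Storage, one `SET,gs://path,label` row per image; check that product's docs, since the exact schema varies per AutoML product. A small helper that generates such rows (the bucket name below is hypothetical):

```python
# AutoML Vision-style import CSV: one "SET,gs://path,label" row per image.
# The exact schema varies per AutoML product -- check that product's docs.
# 'my-training-images' below is a hypothetical bucket name.

def automl_csv_rows(items, bucket):
    """items: (filename, label, set_name) tuples; set_name is
    TRAIN, VALIDATION, or TEST."""
    return [f'{set_name},gs://{bucket}/{fname},{label}'
            for fname, label, set_name in items]

rows = automl_csv_rows([('cat1.jpg', 'cat', 'TRAIN'),
                        ('dog7.jpg', 'dog', 'TEST')],
                       'my-training-images')
```

The resulting file is itself uploaded to Cloud Storage and referenced when you create the dataset.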
40. Machine Learning: Cloud ML Engine
Google Cloud Machine Learning Engine
5 Other tools/APIs to consider
These may also be helpful
41. Storing and Analyzing Data: BigQuery
Google BigQuery
BigQuery: querying Shakespeare words
TITLE = "The most common words in all of Shakespeare's works"
QUERY = '''
SELECT LOWER(word) AS word, sum(word_count) AS count
FROM [bigquery-public-data:samples.shakespeare]
GROUP BY word ORDER BY count DESC LIMIT 10
'''
rsp = BQ.jobs().query(body={'query': QUERY}, projectId=PROJ_ID).execute()
print('\n*** Results for %r:\n' % TITLE)
print('\t'.join(col['name'].upper()    # HEADERS
        for col in rsp['schema']['fields']))
print('\n'.join('\t'.join(str(col['v'])    # DATA
        for col in row['f']) for row in rsp['rows']))
42. Top 10 most common Shakespeare words
$ python bq_shake.py
*** Results for "The most common words in all of Shakespeare's works":
WORD COUNT
the 29801
and 27529
i 21029
to 20957
of 18514
a 15370
you 14010
my 12936
in 11722
that 11519
● BigQuery public data sets: cloud.google.com/bigquery/public-data
● BQ sandbox (1TB/mo free): cloud.google.com/bigquery/docs/sandbox
● Other public data sets: cloud.google.com/public-datasets (Google Cloud),
research.google/tools/datasets (Google Research), and Kaggle (kaggle.com)
● COVID-19
○ How to use our data sets (read blog post)
○ JHU Coronavirus COVID-19 Global Cases data set
○ List of all COVID-19 data sets
● Cloud Life Sciences API: cloud.google.com/life-sciences (read blog post)
● Cloud Healthcare API: cloud.google.com/healthcare (read blog post)
BigQuery & public data sets
Spring 2020
43. Running Code: Compute Engine
Google Compute Engine
Machine Learning: Cloud TPUs
Google Cloud TPU API
44. Running Code: App Engine
Google App Engine
Running Code: Cloud Functions
Google Cloud Functions
45. Running Code: Cloud Run
Google Cloud Run
Storing Data: Cloud Storage & Cloud Filestore
48. G Suite: Google Sheets
Sheets API
Try our Node.js customized reporting tool codelab:
g.co/codelabs/sheets
Why use the Sheets API?
data visualization
customized reports
Sheets as a data source
49. Migrate SQL data to a Sheet
# read SQL data then create new spreadsheet & add rows into it
FIELDS = ('ID', 'Customer Name', 'Product Code',
'Units Ordered', 'Unit Price', 'Status')
cxn = sqlite3.connect('db.sqlite')
cur = cxn.cursor()
rows = cur.execute('SELECT * FROM orders').fetchall()
cxn.close()
rows.insert(0, FIELDS)
DATA = {'properties': {'title': 'Customer orders'}}
SHEET_ID = SHEETS.spreadsheets().create(body=DATA,
fields='spreadsheetId').execute().get('spreadsheetId')
SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID, range='A1',
body={'values': rows}, valueInputOption='RAW').execute()
Migrate SQL data
to Sheets
goo.gl/N1RPwC
G Suite: Google Slides
Slides API (create & manage)
50. Try our Node.js BigQuery GitHub license analyzer codelab:
g.co/codelabs/slides
Why use the Slides API?
data visualization
presentable reports
Try our Node.js Markdown-to-Google-Slides generator:
github.com/gsuitedevs/md2googleslides
Why use the Slides API?
customized presentations
54. What can you do with this?
Accessing maps from
spreadsheets?!?
goo.gl/oAzBN9
This… with help from Google Maps & Gmail
function sendMap() {
var sheet = SpreadsheetApp.getActiveSheet();
var address = sheet.getRange("A2").getValue();
var map = Maps.newStaticMap().addMarker(address);
GmailApp.sendEmail('friend@example.com', 'Map',
'See below.', {attachments:[map]});
}
JS
Codelab and open source repo at
g.co/codelabs/apps-script-intro
55. ● Analyze sentiment in
Google Docs
● Use simple API call to
Natural Language API
● Call with Apps Script
UrlFetch service
● Build this app yourself at
g.co/codelabs/nlp-docs
Analyzing sentiment in Google Docs
56. [simple API/API key sample]
Analyzing sentiment in Google Docs
function getSentiment(text) {
var apiKey = YOUR_API_KEY;
var apiEndpoint =
'https://language.googleapis.com/v1/documents:analyzeSentiment?key=' + apiKey;
// NL API metadata JSON object
var nlData = {
document: {
language: 'en',
type: 'PLAIN_TEXT',
content: text
},
encodingType: 'UTF8'
};
[simple API/API key sample]
Analyzing sentiment in Google Docs
// Create API payload
var nlOptions = {
method: 'POST',
contentType: 'application/json',
payload: JSON.stringify(nlData)
};
// Make API call via UrlFetch (when no object available)
var response = UrlFetchApp.fetch(apiEndpoint, nlOptions);
var data = JSON.parse(response);
var sentiment = 0.0;
if (data && data.documentSentiment && data.documentSentiment.score) {
sentiment = data.documentSentiment.score;
}
return sentiment;
}
57. ● Extend functionality of G Suite editors
● Embed your app within ours!
● 2014: Google Docs, Sheets, Forms
● 2017 Q3: Google Slides
● 2017 Q4: Gmail
● 2018 Q1: Hangouts Chat bots
● Apps Script also powers Google Data
Studio community connectors, and
Google Ads scripts
Apps Script powers add-ons… and more!
6 All of Cloud
(inspiration)
Build powerful solutions with both
GCP and G Suite
59. Gmail message processing with GCP
Star message (Gmail) → message notification (Cloud Pub/Sub) →
trigger function (Cloud Functions) → extract images →
categorize images (Cloud Vision)
G Suite + GCP: inbox augmented with a Cloud Function
60. ● Gmail API: sets up notification forwarding to Cloud Pub/Sub
● developers.google.com/gmail/api/guides/push
● Pub/Sub: triggers logic hosted by Cloud Functions
● cloud.google.com/functions/docs/calling/pubsub
● Cloud Functions: "orchestrator" accessing GCP (and G Suite) APIs
● Combine all of the above to add custom intelligence to Gmail
● Deep dive code blog post
● cloud.google.com/blog/products/application-development/
adding-custom-intelligence-to-gmail-with-serverless-on-gcp
● Application source code
● github.com/GoogleCloudPlatform/cloud-functions-gmail-nodejs
App summary
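The first link in that chain, having Gmail forward notifications to Pub/Sub, is a single users.watch() call whose request body is plain JSON; the project and topic names below are placeholders:

```python
# Request body for the Gmail API users.watch() call that starts
# forwarding INBOX change notifications to a Cloud Pub/Sub topic.
# 'my-project' and 'gmail-push' are placeholder names.

def make_watch_body(project, topic, label_ids=('INBOX',)):
    return {
        'topicName': f'projects/{project}/topics/{topic}',
        'labelIds': list(label_ids),
    }

body = make_watch_body('my-project', 'gmail-push')
# With an authorized Gmail service object, this would then be:
# GMAIL.users().watch(userId='me', body=body).execute()
```

Note that watches expire and must be renewed periodically; see the Gmail push guide linked above for details.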
Big data analysis to slide presentation
Access GCP tools from G Suite
64. Supercharge G Suite with GCP
Application request (Apps Script) → big data analytics (BigQuery) →
results to Sheets & Slides (G Suite + GCP)
65. App summary
● Leverage GCP and build the "final mile" with G Suite
● Driven by Google Apps Script
● Google BigQuery for data analysis
● Google Sheets for visualization
● Google Slides for presentable results
● "Glued" together w/G Suite serverless
● Build this app (codelab): g.co/codelabs/bigquery-sheets-slides
● Video and blog post: bit.ly/2OcptaG
● Application source code: github.com/googlecodelabs/bigquery-sheets-slides
● Presented at Google Cloud NEXT (Jul 2018 [DEV229] & Apr 2019 [DEV212])
● cloud.withgoogle.com/next18/sf/sessions/session/156878
● cloud.withgoogle.com/next/sf/sessions?session=DEV212
Cloud image processing workflow
Archive and analyze G Suite images with GCP
68. Cloud image processing workflow (G Suite + GCP)
Archive image (Drive → Cloud Storage) → categorize image (Cloud Vision) →
record results (Sheets)
69. Image processing workflow
def drive_get_file(fname):
    rsp = DRIVE.files().list(q="name='%s'" % fname).execute().get('files')[0]
    fileId, fname, mtype = rsp['id'], rsp['name'], rsp['mimeType']
    blob = DRIVE.files().get_media(fileId=fileId).execute()
    return fname, mtype, rsp['modifiedTime'], blob

def gcs_blob_upload(fname, bucket, blob, mimetype):
    body = {'name': fname, 'uploadType': 'multipart',
            'contentType': mimetype}
    return GCS.objects().insert(bucket=bucket, body=body,
            media_body=blob).execute()

def vision_label_img(img, top):
    body = {'requests': [{'image': {'content': img}, 'features':
            [{'type': 'LABEL_DETECTION', 'maxResults': top}]}]}
    rsp = VISION.images().annotate(
            body=body).execute().get('responses')[0]
    return ', '.join('%s (%.2f%%)' % (label['description'],
            label['score']*100.) for label in rsp['labelAnnotations'])

def sheet_append_rows(sheet, rows):
    rsp = SHEETS.spreadsheets().values().append(
            spreadsheetId=sheet, range='Sheet1',
            body={'values': rows}, valueInputOption='RAW').execute()
    return rsp.get('updates').get('updatedCells')

def main(fname, bucket, sheet_id, folder, top):
    fname, mtype, ftime, data = drive_get_file(fname)
    gcs_blob_upload(fname, bucket, data, mtype)
    rsp = vision_label_img(data, top)
    sheet_append_rows(sheet_id, [[fname, mtype, ftime, len(data), rsp]])
● Primary goal: imagine a business scenario to analyze images in G Suite
● Other goals: free up a highly-utilized resource, archive data to
colder/cheaper storage, analyze images, generate a report for management
● Download image binary from Google Drive
● Upload object to Cloud Storage bucket
● Send payload for analysis by Cloud Vision
● Write back-up location & analysis results into Google Sheets
● Codelab: self-paced (1+ hour) hands-on tutorial
● g.co/codelabs/drive-gcs-vision-sheets
● Application source code
● github.com/googlecodelabs/analyze_gsimg
App summary
70. 7 Wrap-up
Summary and resources
Machine learning session summary
● What is machine learning again?
○ Solving harder problems by making computers smarter
○ "Using data to answer questions." ~Yufeng Guo, Google Cloud
● How do you machine learning again?
○ Collect lots of data
○ Build and train your model then validate it
○ Use your model to make predictions on new data
● Do you need lots of machine learning experience to get started?
○ No: use pre-trained models available via APIs
○ No: need to do training? Consider using AutoML APIs
○ Build your experience then use standard OSS library when ready
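Those three steps fit in a few lines even without any framework. A toy 1-nearest-neighbor "model" in plain Python, just to make collect → train → predict concrete (the data here is made up):

```python
# Toy illustration of collect -> "train" -> predict: a 1-nearest-neighbor
# classifier, where "training" is simply memorizing the labeled data.

def predict(train_data, x):
    """train_data: list of (feature, label); returns the label of the
    training point whose feature is closest to x."""
    nearest = min(train_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

data = [(1.0, 'walk'), (2.5, 'walk'), (6.0, 'run'), (9.0, 'run')]
guess = predict(data, 7.0)  # nearest point is 6.0 -> 'run'
```

Real models generalize far better than memorization, but the workflow (collect, fit, validate, predict on new data) is the same shape.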
71. Google APIs and ML resources
● G Suite, Google Apps Script docs, videos, repos
○ developers.google.com/gsuite
○ github.com/gsuitedevs
○ goo.gl/JpBQ40
● Google Cloud Platform (GCP) documentation & open source repos
○ cloud.google.com/products/ai/building-blocks
○ github.com/GoogleCloudPlatform
○ youtube.com/GoogleCloudPlatform
● Your next steps…
○ Customize our ML APIs with AutoML: cloud.google.com/automl
○ Get hands-on with a Cloud ML codelab (self-paced tutorial): gcplab.me
○ Explore: Kaggle (kaggle.com) and Colab (colab.research.google.com)
Other Google APIs & platforms
● Firebase (mobile development platform + RT DB)
○ firebase.google.com
● Google Data Studio (data visualization, dashboards, etc.)
○ datastudio.google.com/overview
○ goo.gle/datastudio-course
● Actions on Google/Assistant/DialogFlow (voice apps)
○ developers.google.com/actions
● YouTube (Data, Analytics, and Livestreaming APIs)
○ developers.google.com/youtube
● Google Maps (Maps, Routes, and Places APIs)
○ developers.google.com/maps
● Flutter (native apps [Android, iOS, web] w/1 code base[!])
○ flutter.dev
72. FYI and FYA (if you/your students love comics)
cloud.google.com/products/ai/ml-comic-[12]
Thank you!
Wesley Chun
@wescpy
Progress bars: goo.gl/69EJVw