fog
or: How I Learned to Stop Worrying and Love the Cloud
geemus (Wesley Beary)
web: github.com/geemus
twitter: @geemus
employer and sponsor
web: engineyard.com
twitter: @engineyard
CLOUD
API driven on demand services
core: compute, dns, storage
also: kvs, load balance, ...
What?
on demand - only pay for what you actually use

flexible - add and remove resources in minutes (instead of weeks)

repeatable - code, test, repeat

resilient - build better systems with transient resources
Why Worry?
option overload - which provider/service should I use

expertise - each service has yet another knowledge silo

tools - vastly different API, quality, maintenance, etc

standards - slow progress and differing interpretations
Ruby cloud services
web: github.com/geemus/fog
twitter: @fog
Why?
portable - AWS, Bluebox, Brightbox, Google, Rackspace, Slicehost, Terremark, ...

powerful - compute, dns, storage, collections, models, mocks, requests, ...

established - 92k downloads, 1112 followers, 141 forks, 67 contributors, me, ...

Fog.mock! - faster, cheaper, simulated cloud behavior
Who?
libraries - carrierwave, chef, deckard, gaff, gemcutter, ...

products - DevStructure, Engine Yard, iSwifter, OpenFeint, RowFeeder, ...
Interactive Bit!
      cloud
       fog
What?
That’s great and all but I don’t have a use case...
uptime - because who wants a busted web site?
Setup
geymus ~ ⌘ gem install fog
or
geymus ~ ⌘ sudo gem install fog
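
(a quick sanity check, a minimal sketch: if the gem installed, it should load and report its version)

geymus ~ ⌘ ruby -e "require 'fog'; puts Fog::VERSION"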
Get Connected
credentials = {
  :provider           => 'Rackspace',
  :rackspace_api_key  => RACKSPACE_API_KEY,
  :rackspace_username => RACKSPACE_USERNAME
}

# setup a connection to the service
compute = Fog::Compute.new(credentials)
Boot that Server
require 'net/ssh' # fog pulls this in, but require it explicitly

server_data = compute.create_server(
  1,  # flavor id
  49  # image id (Ubuntu 10.04 LTS, per the images table later)
).body['server']

until compute.get_server_details(
  server_data['id']
).body['server']['status'] == 'ACTIVE'
end

commands = [
  %{mkdir .ssh},
  %{echo #{File.read(File.expand_path('~/.ssh/id_rsa.pub'))} >> ~/.ssh/authorized_keys},
  %{passwd -l root}
]

Net::SSH.start(
  server_data['addresses'].first,
  'root',
  :password => server_data['password']
) do |ssh|
  commands.each do |command|
    ssh.open_channel do |ssh_channel|
      ssh_channel.request_pty
      ssh_channel.exec(%{bash -lc '#{command}'})
      ssh.loop
    end
  end
end
Worry!
arguments - what goes where, what does it mean?

portability - most of this will only work on Rackspace

disservice - back to square one, but with tools in hand
Bootstrap
server_attributes = {
  :image_id         => '49',
  :private_key_path => PRIVATE_KEY_PATH,
  :public_key_path  => PUBLIC_KEY_PATH
}

# boot server and setup ssh keys
server = compute.servers.bootstrap(server_attributes)
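
(a follow-up sketch: bootstrap returns a server model, so the helpers used later in this deck are already in hand)

server.ready?                       # the readiness check used in 'sanity check' later
server.ssh('uname -a').first.stdout # the ssh helper used in 'ping' later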
Servers?
compute.servers # list servers, same as #all

compute.servers.get(1234567890) # server by id

compute.servers.reload # update to latest

compute.servers.new(attributes) # local model

compute.servers.create(attributes) # remote model
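
(a small sketch: collections are Enumerable, so the usual iterators apply; attribute names vary slightly by provider)

compute.servers.each do |server|
  puts "#{server.id}: #{server.state}"
end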
ping
# ping target 10 times
ssh_results = server.ssh("ping -c 10 #{target}")
stdout = ssh_results.first.stdout

# parse result, last line is summary
# round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms
stats = stdout.split("\n").last.split(' ')[-2]
min, avg, max, stddev = stats.split('/')

NOTE: most complex code was string parsing!?!
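
(an alternative parsing sketch, a hypothetical refinement rather than the deck's code: a regex is sturdier than chained splits)

if stdout =~ %r{= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms}
  min, avg, max, stddev = $1, $2, $3, $4
end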
cleanup
# shutdown the server
server.destroy

# return the data as a hash
{
  :min    => min,
  :avg    => avg,
  :max    => max,
  :stddev => stddev
}
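
(a consolidation sketch: the whole walkthrough as one hypothetical helper, built only from pieces shown above)

def measure_ping(compute, target, server_attributes)
  server = compute.servers.bootstrap(server_attributes)
  stdout = server.ssh("ping -c 10 #{target}").first.stdout
  # last line: round-trip min/avg/max/stddev = A.A/B.B/C.C/D.D ms
  min, avg, max, stddev = stdout.split("\n").last.split(' ')[-2].split('/')
  { :min => min, :avg => avg, :max => max, :stddev => stddev }
ensure
  server.destroy if server
end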
Next!
-server_data = compute.create_server(
+compute.import_key_pair(
+  'id_rsa.pub',
+  File.read('~/.ssh/id_rsa.pub')
+)
+
+compute.authorize_security_group_ingress(
+  'CidrIp'      => '0.0.0.0/0',
+  'FromPort'    => 22,
+  'IpProtocol'  => 'tcp',
+  'GroupName'   => 'default',
+  'ToPort'      => 22
+)
+
+server_data = compute.run_instances(
+  'ami-1a837773',
   1,
-  49
-).body['server']
+  1,
+  'InstanceType'  => 'm1.small',
+  'KeyName'       => 'id_rsa.pub',
+  'SecurityGroup' => 'default'
+).body['instancesSet'].first

-until compute.get_server_details(
-  server_data['id']
-).body['server']['status'] == 'ACTIVE'
+until compute.describe_instances(
+  'instance-id' => server_data['instanceId']
+).body['reservationSet'].first['instancesSet'].first['instanceState']['name'] == 'running'
 end

+sleep(300)
+
 Net::SSH.start(
-  server_data['addresses'].first,
-  'root',
-  :password => server_data['password']
+  server_data['ipAddress'],
+  'ubuntu',
+  :key_data => [File.read('~/.ssh/id_rsa')]
 ) do |ssh|
   commands = [
     %{'mkdir .ssh'},
geopinging v1
# specify a different provider
credentials = {
  :provider              => 'AWS',
  :aws_access_key_id     => AWS_ACCESS_KEY_ID,
  :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
}

server_attributes = {
  :image_id         => 'ami-1a837773',
  :private_key_path => PRIVATE_KEY_PATH,
  :public_key_path  => PUBLIC_KEY_PATH,
  :username         => 'ubuntu'
}
geopinging v2
# specify a different aws region
# ['ap-southeast-1', 'eu-west-1', 'us-west-1']
credentials.merge!({
  :region => 'eu-west-1'
})
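
(a sketch of where this is heading: one measurement per region, using the hypothetical measure_ping helper from earlier)

regions = ['ap-southeast-1', 'eu-west-1', 'us-west-1']
results = regions.map do |region|
  compute = Fog::Compute.new(credentials.merge(:region => region))
  [region, measure_ping(compute, target, server_attributes)]
end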
geopinging v...
portable - AWS, Bluebox, Brightbox, Rackspace, Slicehost, Terremark, ...

lather, rinse, repeat
How?
That is awesome, but how did you...
exploring
geymus ~ ⌘ fog

  To run as 'default', add the following to ~/.fog

:default:
  :aws_access_key_id:     INTENTIONALLY_LEFT_BLANK
  :aws_secret_access_key: INTENTIONALLY_LEFT_BLANK
  :public_key_path:       INTENTIONALLY_LEFT_BLANK
  :private_key_path:      INTENTIONALLY_LEFT_BLANK
  :rackspace_api_key:     INTENTIONALLY_LEFT_BLANK
  :rackspace_username:    INTENTIONALLY_LEFT_BLANK
  ...
sign posts
geymus ~ ⌘ fog
Welcome to fog interactive!
:default credentials provide AWS and Rackspace
>> providers
[AWS, Rackspace]
>> Rackspace.collections
[:directories, :files, :flavors, :images, :servers]
>> Rackspace[:compute]
#<Fog::Rackspace::Compute ...>
>> Rackspace[:compute].requests
[:confirm_resized_server, ..., :update_server]
what are those?
provider   => [AWS, Rackspace, Zerigo, ...]

service    => [Compute, DNS, Storage, ...]

collection => [flavors, images, servers, ...]

model      => [flavor, image, server, ...]

request    => [describe_instances, run_instances, ...]
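
(a minimal sketch mapping the vocabulary onto the interactive session above)

service    = Rackspace[:compute]  # provider + service
collection = service.servers      # collection
model      = collection.first     # model (nil until a server exists)
response   = service.list_servers # request, returning an Excon::Response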
requests?
>> Rackspace[:compute].list_servers
#<Excon::Response:0x________
@body = {
  "servers" => []
},
@headers = {
  "X-PURGE-KEY"=>"/______/servers",
  ...,
  "Connection"=>"keep-alive"
},
@status=200>
sanity check
>> Rackspace.servers.select {|server| server.ready?}
<Fog::Rackspace::Compute::Servers
  filters={}
  []
>
>> AWS.servers.select {|server| server.ready?}
<Fog::AWS::Compute::Servers
  []
>
>> exit
finding images
>> Rackspace.images.table([:id, :name])
+------+------------------------------------------+
| id   | name                                     |
+------+------------------------------------------+
| 49   | Ubuntu 10.04 LTS (lucid)                 |
+------+------------------------------------------+
...
>> AWS.images # I use alestic.com listing
...
exploring...
It takes forever!
It’s so expensive!
A warm welcome for Fog.mock!
Mocks!
geymus ~ ⌘ FOG_MOCK=true fog
or
require 'fog'
Fog.mock!
simulation
Most functions just work!
Unimplemented mocks? Errors keep you on track.
Tests run against both, so it is either consistent or a bug.
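
(a minimal sketch of leaning on the simulation in tests; credentials can be fake, assuming the AWS mocks cover these calls)

require 'fog'
Fog.mock!

compute = Fog::Compute.new(
  :provider              => 'AWS',
  :aws_access_key_id     => 'fake',
  :aws_secret_access_key => 'fake'
)
server = compute.servers.create(:image_id => 'ami-1a837773')
server.wait_for { ready? } # simulated state change, no bill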
Back to Business
I have a bunch of data, now what?
storage - aggregating cloud data
Get Connected
credentials = {
  :provider              => 'AWS',
  :aws_access_key_id     => AWS_ACCESS_KEY_ID,
  :aws_secret_access_key => AWS_SECRET_ACCESS_KEY
}

# setup a connection to the service
storage = Fog::Storage.new(credentials)
directories
# create a directory
directory = storage.directories.create(
  :key    => directory_name,
  :public => true
)
files
# store the file
file = directory.files.create(
  :body   => File.open(path),
  :key    => name,
  :public => true
)

# return the public url for the file
file.public_url
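
(a small follow-up sketch: reading the file back later by key)

file = directory.files.get(name)
file.body       # the stored contents
file.public_url # same url as above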
geostorage
# specify a different provider
credentials = {
  :provider           => 'Rackspace',
  :rackspace_api_key  => RACKSPACE_API_KEY,
  :rackspace_username => RACKSPACE_USERNAME
}
cleanup
geymus ~ ⌘ fog
...
>> directory = AWS.directories.get(DIRECTORY_NAME)
...
>> directory.files.each {|file| file.destroy}
...
>> directory.destroy
...
>> exit
geoaggregating
portable - AWS, Google, Local, Rackspace

lather, rinse, repeat
Phase 3: Profit
I’ve got the data, but how do I freemium?
dns - make your cloud (premium) accessible
Get Connected
credentials = {
  :provider     => 'Zerigo',
  :zerigo_email => ZERIGO_EMAIL,
  :zerigo_token => ZERIGO_TOKEN
}

# setup a connection to the service
dns = Fog::DNS.new(credentials)
zones
# create a zone
zone = dns.zones.create(
  :domain => domain_name,
  :email  => "admin@#{domain_name}"
)
records
# create a record
record = zone.records.create(
  :ip   => '1.2.3.4',
  :name => "#{customer_name}.#{domain_name}",
  :type => 'A'
)
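
(a sketch of the freemium idea above: one A record per paying customer; premium_customers is a hypothetical list)

premium_customers.each do |customer_name|
  zone.records.create(
    :ip   => '1.2.3.4',
    :name => "#{customer_name}.#{domain_name}",
    :type => 'A'
  )
end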
cleanup
geymus ~ ⌘ fog
...
>> zone = Zerigo.zones.get(ZONE_ID)
...
>> zone.records.each {|record| record.destroy}
...
>> zone.destroy
...
>> exit
geofreemiuming
portable - AWS, Linode, Slicehost, Zerigo

lather, rinse, repeat
Congratulations!
todo - copy/paste, push, deploy!

budgeting - find ways to spend your pile of money

geemus - likes coffee, bourbon, games, etc

retire - at your earliest convenience
Love!
knowledge - expertise encoded in ruby

empowering - show the cloud who is boss

exciting - this is some cutting edge stuff!
Homework: Easy
follow @fog to hear about releases

follow github.com/geemus/fog to hear the nitty gritty

proudly display stickers wherever hackers are found

ask geemus your remaining questions

play games with geemus
Homework: Normal
report issues at github.com/geemus/fog/issues

irc #ruby-fog on freenode

discuss at groups.google.com/group/ruby-fog

write blog posts

give lightning talks
Homework: Hard
help make fog.io the cloud services resource for ruby

send pull requests fixing issues or adding features

proudly wear the contributor-only grey shirt wherever hackers are found
Homework: Expert
help maintain the cloud services you depend on

become a collaborator by keeping informed and involved

proudly wear the commit-only black shirt wherever hackers are found
Thanks! Questions?
(see also: README)

examples - http://gist.github.com/729992
slides   - http://slidesha.re/hR8sP9
repo     - http://github.com/geemus/fog
bugs     - http://github.com/geemus/fog/issues

@geemus - questions, comments, suggestions
