A quick tour of Autonomic Computing
      Skill Level: Introductory


      Daniel Worden
      Author

      Nicholas Chase
      Author



      07 Apr 2004


      Autonomic computing architecture is a range of software technologies that enable
      you to build an information infrastructure that can, to greater or lesser degrees,
      manage itself, saving countless hours (and dollars) in human management. And all
      this without giving up control of the system. This tutorial explains the concepts behind
      autonomic computing and looks at the tools at your disposal for making it happen
      today.


      Section 1. Before you start

      About this tutorial
      This tutorial explains the general concepts behind autonomic computing
      architectures, including control loops and autonomic-managed resources. It also
      discusses the various tools that are currently available within the IBM Autonomic
      Computing Toolkit, providing a concrete look at some of the tasks you can
      accomplish with it.

      The tutorial covers:

                • Autonomic computing concepts
                • Contents of the IBM Autonomic Computing Toolkit
                • Installing applications using solution installation and deployment
                  technologies
                • Application administration using the Integrated Solutions Console


A quick tour of Autonomic Computing
© Copyright IBM Corporation 1994, 2006. All rights reserved.                              Page 1 of 27
developerWorks®                                                                         ibm.com/developerWorks




               • Problem determination using the Generic Log Adapter and the Log and
                 Trace Analyzer
               • A brief look at Common Base Events
               • Autonomic management using the Autonomic Management Engine
                 (AME)
               • AME Resource Models and a brief look at programming using AME
     When you have completed this tutorial, you should feel comfortable looking further
     into any of these topics, while understanding their place in the autonomic computing
     infrastructure.

      This tutorial is for developers who not only want to understand the concepts behind
      autonomic computing architectures, but who also want to begin implementing
      autonomic computing solutions in their applications with the help of the IBM
      Autonomic Computing Toolkit.

     Although this tutorial is aimed at developers, no actual programming is required in
     order to get the full value from it. When present, code samples are shown using the
     Java language, but an in-depth understanding of the code is not required to gain
     insight into the concepts they illustrate.


     Prerequisites
     This tutorial doesn't actually require you to install any tools, but it does look at the
     following autonomic computing bundles, available from developerWorks at:
     http://www.ibm.com/developerworks/autonomic/probdet1.html.

               • Autonomic Management Engine
               • Resource Model Builder
               • Integrated Solutions Console
               • Solution installation and deployment scenarios
               • Generic Log Adapter and Log and Trace Analyzer Tooling
               • Generic Log Adapter Runtime and Rule Sets
               • Problem Determination Scenario
               • Eclipse Tooling




     Section 2. What is autonomic computing?


      The current state of affairs
      As long-time developers, we've found that the most underestimated
      area of software development is maintenance. Everybody wants to build the
      application, but once it's out there, developers just want to forget about it. They tend
      not to think about the administrators who have to spend hours and days and even
      weeks configuring their systems for the software to work right, or to prevent
      problems such as full drives and bad equipment. As a general rule, a developer is
      only worried about what needs to be done to the system to install his or her own
      product; the effects on other, previously installed products are frequently overlooked.
      What's more, optimizing a setup can be a risky process, with changes made in one
      area causing problems in another. Management of multiple systems usually means
      learning several different management applications.

      How many hours, and by extension dollars (or Euros, or yen, or rupees) are
      consumed just keeping software up and running? Surely there are more productive
      things these human resources can be doing, such as planning to make the business
      run better?

      In short, there's a need in today's computing infrastructure for a better way to do
      things. One way to reach that goal is through autonomic computing technology.


      Where we want to go
      A system built on autonomic computing concepts tries to eliminate much of that
      potentially wasted or duplicated effort. Look at the current state of affairs, but this
      time, consider how to apply the principles of autonomic computing technology.

      First, the software installation process should truly be a one-click process. The
      software would know not only what dependencies must be fulfilled for its own
      operation, but also how to get those dependencies met if the system didn't already
      have them in place. Examples would include such things as downloading operating
      system patches and provisioning additional drive space. The install would know how
      to avoid conflicts with already deployed applications. Once installed, it would be
      managed from a single, common, management interface used for all applications.
      After the software was up and running, it would consistently monitor itself and the
      environment for problems. When detected, it would determine the source of the error
      and resolve it. All of this would be managed from a consolidated point, so the CEO
      or system administrator could see at a glance who had access to what, even though
      resources might be spread over multiple applications, platforms and locations.

      In other words, you're looking for a system that is self-configuring, self-healing,
      self-protecting, and self-optimizing. So how do you get it?


      Autonomic infrastructure



     In order to create such a system, you've got to create an infrastructure that enables
     a system to monitor itself and take action when changes occur. To accomplish that,
      an autonomic computing solution is made up of an autonomic manager and
     managed resources. The autonomic manager and the managed resources talk to
     each other using the standard Sensor and Effector APIs delivered through the
     managed resource touchpoint.

     The managed resources are controlled system components that can range from
     single resources such as a server, database server, or router to collections of
     resources like server pools, clusters, or business applications.

     The autonomic manager implements autonomic control loops by dividing them into
     four parts: monitor, analyze, plan, and execute. The control loop carries out tasks as
     efficiently as possible based on high-level policies.

               • Monitor: Through information received from sensors, the resource
                 monitors the environment for specific, predefined conditions. These
                 conditions don't have to be errors; they can be a certain load level or type
                 of request.
               • Analyze: Once the condition is detected, what does it mean? The
                 resource must analyze the information to determine whether action
                 should be taken.
               • Plan: If action must be taken, what action? The resource might simply
                 notify the administrator, or it might take more extensive action, such as
                 provisioning another hard drive.
               • Execute: It is this part of the control loop that sends the instruction to the
                 effector, which actually affects or carries out the planned actions.
     Figure 1. Autonomic control loops





      All of the actions in the autonomic control loop either make use of or supplement the
      knowledge base for the resource. For example, the knowledge base helps the
      analysis phase of the control loop to understand the information it's getting from the
      monitor phase. It also provides the plan phase with information that helps it select
      the action to be performed.

      Note that the entire autonomic control loop need not be handled by a single
      application. For example, you might have one application (such as IBM Director) that
      handles the Monitor and Analyze phases, while a second product (such as Toshiba
      ClusterPerfect) handles the plan and execute phases.
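
      The four phases can be sketched in a few lines of the Java language. The class and
      method names here are purely illustrative (they are not part of any toolkit API), but
      they show how a single policy threshold drives the monitor, analyze, plan, and
      execute steps, with the touchpoint's sensor and effector on either end:

```java
// A minimal sketch of a monitor-analyze-plan-execute (MAPE) control loop.
// All names here are illustrative, not the toolkit's actual API.

import java.util.ArrayList;
import java.util.List;

public class ControlLoopSketch {

    // The sensor side of a touchpoint: reports a metric from the managed resource.
    interface Sensor { double read(); }

    // The effector side: carries out an action against the managed resource.
    interface Effector { void apply(String action); }

    static class AutonomicManager {
        private final Sensor sensor;
        private final Effector effector;
        private final double threshold;  // policy: act when the metric exceeds this
        private final List<String> knowledge = new ArrayList<>(); // knowledge base

        AutonomicManager(Sensor sensor, Effector effector, double threshold) {
            this.sensor = sensor;
            this.effector = effector;
            this.threshold = threshold;
        }

        // One pass through the control loop; returns the action taken, if any.
        String runOnce() {
            double value = sensor.read();                         // Monitor
            boolean breach = value > threshold;                   // Analyze
            String action = breach ? "provision-capacity" : null; // Plan
            if (action != null) {
                effector.apply(action);                           // Execute
                knowledge.add("acted at " + value);  // supplement the knowledge base
            }
            return action;
        }
    }

    public static void main(String[] args) {
        AutonomicManager mgr = new AutonomicManager(() -> 0.92, a -> {}, 0.85);
        System.out.println(mgr.runOnce()); // prints "provision-capacity"
    }
}
```

      In a real autonomic manager, the analyze and plan phases would consult the
      knowledge base rather than a hard-coded threshold, but the division of labor
      among the four phases is the same.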


      Autonomic maturity levels
      It would be unreasonable to expect all software and system administration
      processes to suddenly go from a completely manual state to a completely autonomic
      state. Fortunately, with the autonomic computing model, that's not a requirement.
      The model defines five levels of autonomic maturity, which measure where any
      given process falls on that spectrum. These maturity levels are:

                • Basic - Personnel hold all of the important information about the product
                  and environment. Any action, including routine maintenance, must be
                  planned and executed by humans.
                • Managed - Scripting and logging tools automate routine execution and
                  reporting. Individual specialists review information gathered by the tools to
                  make plans and decisions.
                • Predictive - As preset thresholds are tripped, the system raises early
                  warning flags and recommends appropriate actions from the knowledge
                  base. Centralized storage of common occurrences and past experience
                  also speeds the resolution of events.
               • Adaptive - Building on the predictive capabilities, the adaptive system is
                 empowered to take action based on the situation.
               • Autonomic - Policy drives system activities, including allocation of
                 resources within a prioritization framework and acquisition of prerequisite
                 dependencies from outside sources.
     Most systems today are at the basic or managed level, though there are, of course,
     exceptions. Take a look at areas in which systems most need improvement, and the
      tools that can make this possible.


     The Autonomic Computing Toolkit
     To make it easier to develop an autonomic solution, IBM has put together the
     Autonomic Computing Toolkit. Strictly speaking, this is not a single toolkit, but rather
     several "bundles" of applications intended to work together to produce a complete
     solution. These bundles include:

               • Autonomic Management Engine
               • Resource Model Builder
               • Integrated Solutions Console
               • Solution installation and deployment
               • Generic Log Adapter and Log and Trace Analyzer
     You can use each of these tools individually, or you can combine them for a larger
     scope solution. In fact, the toolkit also includes a number of "scenarios" that
     demonstrate the use of some of these applications and their integration, including:

               • Problem Determination Scenario
               • Solution installation and deployment scenario using ISSI
               • Solution installation and deployment scenario using InstallAnywhere
               • Solution installation and deployment samples scenario
     The toolkit also includes the Autonomic Computing Information bundle, which
     includes much of the available documentation for the other bundles.

     If you do not already have Eclipse 3.0 or higher, the toolkit provides an Eclipse
     Tooling bundle that includes Eclipse runtimes needed by toolkit components.



      Now, take a look at what these bundles provide.


      Installation and administration
      Before you even think about problem resolution (which we'll talk about next), a
      software application has to be up and running. That means installation and
      configuration. We'll start with installation.

      The solution installation and deployment capabilities that come with the IBM
      Autonomic Computing Toolkit enable you to build an installation engine that can
      detect missing dependencies and take action. It also tracks the applications that
      have been installed and their dependencies, so subsequent installs and uninstalls
      don't step on each other, even if they come from different vendors. (As long as
      they're all autonomically aware, of course.) It does that by maintaining a database of
      conditions, and checking it before performing the installation. It also communicates
      with the platform through touchpoints. These server-specific interfaces provide data
      on available resources, installed components, and their relationships. Touchpoints
      are a key enabler for autonomic dependency checking. For more information on the
      dependency checker that comes with the solution installation and deployment
      technologies, see Installing a solution.

      Once you've installed the software, you'll need to administer it. Rather than forcing
      you to learn a new management console for every application, the IBM Autonomic
      Computing Toolkit provides the Integrated Solutions Console. This console is based
      on a portal infrastructure, so any autonomically aware application can install
      components for the administrator to use within this single console. The Integrated
      Solutions Console also provides for centralized permissions management through
      access to a credential vault. In this way, single sign-on, system access, and
      administration can be consolidated and controlled for multiple servers. For more
      information on the Integrated Solutions Console, see The Integrated Solutions
      Console.


      Problem determination and analysis
      The software is installed, configured, and running. This is of course when something
      goes horribly wrong. With the old way of doing things, you'd have to sort through
      possibly megabytes of log files looking for something you personally might not even
      recognize. In an autonomic solution, events such as log entries are presented in a
      common form (the Common Base Events format), correlated into a situation, and
      compared to a symptom database to determine the problem and potential actions.

      To accomplish all that, the Autonomic Computing Toolkit includes the Generic Log
      Adapter (GLA) and the Log and Trace Analyzer (LTA). The GLA converts logs from
      legacy applications into Common Base Events that can then be read by the LTA. The LTA
      consults a symptom database to correlate the events, identify the problem, and
      perform the actions prescribed for that problem. For more information on these tools,
      see The Generic Log Adapter.
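
      To make the idea concrete, here is a toy normalizer in the Java language. The real
      Generic Log Adapter is rule-driven and emits events conforming to the actual
      Common Base Events schema; this hypothetical sketch only illustrates the principle
      of mapping a proprietary log line onto common fields:

```java
// Illustrative only: a toy parser that maps a legacy log line onto a
// Common Base Event-style set of fields. The real GLA is configured with
// parsing rules and produces the actual Common Base Events XML format.

import java.util.LinkedHashMap;
import java.util.Map;

public class LegacyLogSketch {

    // e.g. "2004-04-07 13:05:22 ERROR disk: volume /var 97% full"
    static Map<String, String> toCommonForm(String line) {
        Map<String, String> event = new LinkedHashMap<>();
        String[] parts = line.split(" ", 4); // date, time, severity, message
        event.put("creationTime", parts[0] + "T" + parts[1]);
        event.put("severity", parts[2]);
        event.put("msg", parts[3]);
        return event;
    }

    public static void main(String[] args) {
        Map<String, String> e =
            toCommonForm("2004-04-07 13:05:22 ERROR disk: volume /var 97% full");
        System.out.println(e.get("severity")); // prints "ERROR"
    }
}
```

      Once every application's events share one shape like this, a single analyzer can
      correlate them into situations, which is exactly the service the LTA provides.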



     Analysis can take place in a number of ways. For example, you might write
     JavaBeans or other code to analyze events and situations.

     Or you might find it simpler to use a tool that performs much of the analysis for you,
     such as the Autonomic Management Engine.


     Monitoring and centralized management
     The Autonomic Management Engine (AME) is meant to be a fairly self-contained
     autonomic computing solution, in that it can monitor the environment and an
     application, analyze any conditions that surface, plan a response, and execute it. It
     is an extensive monitoring engine that uses resource models as its source of
     information. A resource model tells AME what to look for, such as the termination of
     a process or a particular log event, yet because it is flexible, AME can monitor any
     application. All you need to do is provide an appropriate resource model.

      To do that, you can use the IBM Autonomic Computing Toolkit's Resource Model
      Builder, which is a plug-in for the Eclipse IDE.
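
      As a rough illustration of what a resource model expresses, consider this
      hypothetical Java sketch. The Resource Model Builder's real models are far richer
      and are built in Eclipse rather than hand-coded; the names below are assumptions
      for illustration only:

```java
// A toy stand-in for a resource model: it declares what to watch (a process
// name) and reports a violation when the watched condition occurs.
// Illustrative names only, not the Resource Model Builder's real API.

import java.util.Set;

public class ResourceModelSketch {

    static class ProcessModel {
        final String processName;

        ProcessModel(String processName) { this.processName = processName; }

        // One monitoring cycle: flag a problem if the watched process is missing.
        boolean violated(Set<String> runningProcesses) {
            return !runningProcesses.contains(processName);
        }
    }

    public static void main(String[] args) {
        ProcessModel model = new ProcessModel("httpd");
        System.out.println(model.violated(Set.of("java", "sshd"))); // prints "true"
    }
}
```

      The point is that the monitoring engine stays generic: swap in a different model
      and the same engine watches a different resource.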


     Optimization and other issues
     Some areas of the autonomic computing vision are not yet available as part of this
     version of the IBM Autonomic Computing Toolkit, but can be realized using
     independent products. For example, a major goal of autonomic computing
     technology is to create a system that is self-optimizing. An orchestration product
      such as IBM Tivoli® Intelligent ThinkDynamic Orchestrator lets you monitor the
     overall system using autonomic computing concepts to determine when an action
     needs to be taken. The Orchestrator then takes appropriate action as defined by the
     administrator or per business policy.

      You can also provide for autonomic management of provisioning issues, such as
      allocating or deallocating servers based on the relative loads of applications and
      their importance. On the caching front, IBM's Adaptive Replacement Cache
      yields significant performance increases over least-recently-used (LRU)
      caching strategies.

     Whether you are coding multiserver load balancing applications, or simply looking
     for ways to improve database query performance, the autonomic computing model
     from IBM provides tools and techniques to move your code to the next level of
     autonomic maturity.

     Another goal of autonomic computing technology is heterogeneous workload
     management, in which the entire application as a whole can be optimized. One
     example would be a system that includes a web application server, a database
     server, and remote data storage. Heterogeneous workload management makes it
     possible to find bottlenecks in the overall system and resolve them. Products such
     as the Enterprise Workload Manager (EWLM) component of the IBM Virtualization
      Engine provide heterogeneous workload management capabilities. They enable you
      to automatically monitor and manage multi-tiered, distributed, heterogeneous or
      homogeneous workloads across an IT infrastructure to better achieve defined
      business goals for end-user services.

      Now, take a closer look at the tools that make up the Autonomic Computing Toolkit.




      Section 3. Installing a solution

      What solution installation and deployment technologies do
      When you install a software package, you generally have three goals:

                • Satisfy any dependencies or prerequisites
                • Install the actual software
                • Don't break anything that already works
      This last goal is often the toughest part. With today's complex software, version
      conflicts often cause problems. Solution installation and deployment technologies
      solve many of these problems by maintaining a database of software that's already
      been installed and the relevant dependencies.

      When you first attempt to install a new piece of software, the solution installation and
      deployment technologies check whether the prerequisites are met by using web
      services to access "touchpoints," which provide information such as the available
      RAM and disk space, operating system, and so on. They also check the database of
      installed software for other required applications, and make sure that installing
      the new application isn't going to conflict with the prerequisites of any
      existing applications.

      When the installation is complete, the details of the new application are added to the
      solution installation and deployment database so that all the information is available
      for the next install.

      This information also comes in handy when you attempt to uninstall a software
      solution; a central database of prerequisites makes it easier to know what you can
      and cannot safely delete.
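
      The check described above can be sketched in the Java language. The Map standing
      in for the installation database and the method names are assumptions for
      illustration, not the toolkit's actual interfaces:

```java
// A sketch of the install-time dependency check: consult a database of
// installed software before installing. The Map stands in for the toolkit's
// installation database; all names here are illustrative.

import java.util.List;
import java.util.Map;

public class InstallCheckSketch {

    // Returns the required dependencies that are not yet installed.
    static List<String> missing(List<String> required, Map<String, String> installed) {
        return required.stream()
                       .filter(dep -> !installed.containsKey(dep))
                       .toList();
    }

    public static void main(String[] args) {
        // Installed-software database: product name -> version.
        Map<String, String> db = Map.of("WAS Express", "5.0.2");
        System.out.println(missing(List.of("WAS Express", "DB2"), db));
        // prints "[DB2]"
    }
}
```

      A real installer would go further, checking version ranges and the reverse
      direction as well, so that installing the new application cannot invalidate a
      dependency of something already on the system.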


      Installable units
      The first thing to understand about solution installation and deployment is that there
      are two major pieces involved. The first is the actual installer. This is the
      infrastructure that includes the installation database and the code that interprets and
      executes the actions specified in the package itself. The second is the package of
      files to be installed.

     And what about this package? What is it?

      A package consists of a solution module (SM), which can contain other solution
      modules as well as one or more container installable units. A container installable
      unit (CIU) can contain zero or more other container installable units, and one or
      more smallest installable units. The smallest installable unit (SIU) is exactly what it
      sounds like: a single piece of software that includes files to be copied to the
      target system.




     It's important to understand that only the SIU contains actual physical files or groups
     of files. The CIU and SM are concepts, specified in the Installable Unit's deployment
     descriptor. This is an XML file that includes information on dependencies and on
     actions that must take place in order for the software to be successfully installed.
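
      These containment rules read naturally as a composite structure: only the SIU
      carries files, while CIUs and SMs aggregate their children. This Java sketch
      illustrates that idea; it is not the deployment descriptor's real object model:

```java
// Composite sketch of the SM / CIU / SIU hierarchy. Only the SIU holds
// physical files; containers just aggregate. Illustrative names only.

import java.util.ArrayList;
import java.util.List;

public class InstallableUnitSketch {

    interface Unit { List<String> files(); }

    // SIU: the only unit that contains actual physical files.
    static class SIU implements Unit {
        private final List<String> payload;
        SIU(List<String> payload) { this.payload = payload; }
        public List<String> files() { return payload; }
    }

    // CIU or SM: a pure container; its files are the union of its children's.
    static class Container implements Unit {
        private final List<Unit> children = new ArrayList<>();
        Container add(Unit u) { children.add(u); return this; }
        public List<String> files() {
            List<String> all = new ArrayList<>();
            for (Unit u : children) all.addAll(u.files());
            return all;
        }
    }

    public static void main(String[] args) {
        Unit sm = new Container()                                   // solution module
            .add(new Container().add(new SIU(List.of("app.war"))))  // CIU with one SIU
            .add(new SIU(List.of("readme.txt")));                   // bare SIU
        System.out.println(sm.files()); // prints "[app.war, readme.txt]"
    }
}
```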

     Let's look at the process of creating a simple package to be installed via the solution
     installation and deployment environment.


     Create a package
     Creating the package of software to install entails the following basic steps:

               • Install the solution installation and deployment bundle.
                • Install the Solution Module Descriptor plug-in for Eclipse. (This is a
                  convenience step; the descriptor is simply an XML file, so you can create
                  it with a text editor, but it's complex, so you're better off using an editor
                  that understands its structure.)



                • Create a project that includes the actual files to install. This may be as
                  straightforward as a text file or as complex as WebSphere Portal. In any
                  case, this project is independent of any solution installer files; it's just the
                  application itself, and the application becomes the smallest installable unit.
                • Create the deployment descriptor. This step includes the following tasks:
                     • Create the container installable unit.
                     • Add the SIU to the CIU.
                     • Set any SIU dependencies.
                     • Add actions such as "add file" from the packaged ZIP file to the file
                       system.
                     • Set parameters such as install_dir.
                     • Set the target, including operating system information.
      The descriptions here are intended only to give you an overview of what's involved.
      For a complete look at creating a solution module and deploying it, see the solution
      installation and deployment tutorial. (A link is available in Resources.)


      A sample descriptor
      When you're done, you'll have a deployment descriptor something like this one,
      abridged from the InstallShield for Multiplatforms scenario:
       <?xml version="1.0" encoding="UTF-8"?>
       <com.ibm.iudd.schemas.smd:rootIU
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:com.ibm.iudd.schemas.os.container.actions=
             "http://www.ibm.com/IUDD/schemas/OSContainerActions"
       xmlns:com.ibm.iudd.schemas.smd="http://www.ibm.com/IUDD/schemas/SMD">
          <solutionModule IUname="Sample Install">
            <IUdefinition>
              <identity>
                <name>Sample Install</name>
                <UUID>123456789012345678901234567</UUID>
                <version lev="4" ver="0"/>
                <displayName language_bundle="Sample.properties">
                  <short_default_text key="SAMPLE">"Sample
       Install"</short_default_text>
                  <long_default_text key="Sample_Install_DEMO">"Sample
       Install Demonstration"</long_default_text>
                </displayName>
                <manufacturer>
                  <short_default_text key="IBM"
                          >"IBM"</short_default_text>
                  <long_default_text key="IBM"
                          >"IBM"</long_default_text>
                </manufacturer>
              </identity>
            </IUdefinition>
              <variables>
                <variable name="install_from_path">
                  <parameter defaultValue="install/setup"/>
                  <description>
                    <short_default_text key="JAVA_FROM_PATH">
                      "Getting SI demo files from path:"
                    </short_default_text>



                 </description>
               </variable>
               <variable name="install_dir">
                 <parameter defaultValue="/SIDemo"/>
                 <description>
                   <short_default_text key="JAVA_TO_PATH">
                       "Installing SI Demo to directory:"
                   </short_default_text>
                 </description>
               </variable>
             </variables>
            <installableUnit>
              <CIU IUname="WAS_Express_5_0" type=
       "http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows">
                <IUdefinition>
                  <identity>
                    <name> WAS Express 5.0.2</name>
                    <UUID>123456789012345678901234501</UUID>
                    <version mod="6" rel="0" ver="4"/>
                    <displayName>
                      <short_default_text key="WASExpress">"WAS Express
       5.0.2"</short_default_text>
                    </displayName>
                    <manufacturer>
                      <short_default_text key="IBM_WAS">"IBM WAS Express
       Group"</short_default_text>
                    </manufacturer>
                  </identity>
                </IUdefinition>
                <variables>
                  <variable name="ibm_install_from_path">
                    <parameter defaultValue="install/setup/WAS_Exp"/>
                    <description>
                      <short_default_text key="INSTALL_FROM_PATH"
           >"Installing WAS Express from path: "</short_default_text>
                    </description>
                  </variable>
                  <variable name="ibm_install_drive">
                    <parameter defaultValue="$(install_drive)"/>
                    <description>
                      <short_default_text key="INSTALL_TO_DRIVE"
           >"Installing WAS Express on drive: "</short_default_text>
                    </description>
                  </variable>
                </variables>
                <installableUnit>
                  <SIU IUname="WAS_Express_Application_Server" type=
       "http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows">
                    <IUdefinition>
                      <identity>
                        <name>J2EE Servlet Container</name>
                        <UUID>123456789012345678901234502</UUID>
                        <version rel="0" ver="1"/>
                        <displayName>
                          <short_default_text key="J2EE_SERV"
                      >"J2EE Servlet Container"</short_default_text>
                        </displayName>
                        <manufacturer>
                          <short_default_text key="IBM_WAS"
                                  >"WAS"</short_default_text>
                        </manufacturer>
                      </identity>
                      <checks>
                        <capacity checkVarName="J2EE_Serv_Processor_Speed"
                                    type="minimum">
                          <description>
                            <short_default_text key="CHECK_CPU"
                        >"Checking CPU speed..."</short_default_text>
                          </description>
                          <propertyName>Processor</propertyName>
                          <value>100</value>
                        </capacity>
                        <consumption checkVarName="J2EE_Serv_Memory">
                          <description>
                            <short_default_text key="CHECK_RAM"
                    >"Checking available RAM..."</short_default_text>


                             </description>
                            <propertyName>TotalVisibleMemorySize</propertyName>
                             <value>128</value>
                           </consumption>
                           <consumption checkVarName="J2EE_Serv_Disk_Space">
                             <description>
                               <short_default_text key="CHECK_DSK"
                 >"Checking available disk space..."</short_default_text>
                             </description>
                              <propertyName>Partition</propertyName>
                             <value>18.5</value>
                           </consumption>
                              <custom customCheckIdRef="AFileExistsCheck"
                                       checkVarName="True_File_Check">
                                <parameter variableNameRef="classpath">.</parameter>
                                <parameter variableNameRef="filename">.</parameter>
                              </custom>
                              <custom customCheckIdRef="AFileExistsCheck"
                                       checkVarName="False_File_Check">
                                <parameter variableNameRef="classpath">.</parameter>
                                <parameter variableNameRef="filename"
                                       >/Dummy.txt</parameter>
                              </custom>
                           </checks>
                           <requirements>
                             <requirement name="J2EE_Serv_Requirements"
                                       operations="InstallConfigure">
                               <alternative>
                                 <checkItem checkVarName="J2EE_Serv_Processor_Speed"
                                              />
                                 <checkItem checkVarName="J2EE_Serv_Memory"/>
                                 <checkItem checkVarName="J2EE_Serv_Disk_Space"/>
                                   <checkItem checkVarName="True_File_Check"/>
                                   <checkItem checkVarName="False_File_Check"/>
                          </alternative>
                        </requirement>
                      </requirements>
                    </IUdefinition>
                    <variables>
                      <variable name="J2EE_Serv_Processor_Speed"/>
                      <variable name="J2EE_Serv_Memory"/>
                      <variable name="J2EE_Serv_Disk_Space"/>
                      <variable name="True_File_Check"/>
                      <variable name="False_File_Check"/>
                      <variable name="Filename"/>
                    </variables>
                    <unit xsi:type=
                      "com.ibm.iudd.schemas.os.container.actions:OsActionGroup">
                      <actions>
                        <addDirectory sequenceNumber="0" add="true"
                                   descend_dirs="true">
                          <destination>$(ibm_install_dir)/WAS Express
       5.0</destination>
                        </addDirectory>
                        <addDirectory sequenceNumber="0" add="true"
                                   descend_dirs="true">
                          <destination>$(ibm_install_dir)/WAS Express
       5.0/bin</destination>
                          <source>
                              <source_directory
                            >$(ibm_install_from_path)/was/bin</source_directory>
                          </source>
                        </addDirectory>
                        <addDirectory sequenceNumber="1" add="true"
                                descend_dirs="true">
                          <destination>$(ibm_install_dir)/WAS Express
       5.0/classes</destination>
                        </addDirectory>
                        <addDirectory sequenceNumber="2" add="true"
                                descend_dirs="true">
                          <destination>$(ibm_install_dir)/WAS Express
       5.0/config</destination>
                           <source>
                             <source_directory
                          >$(ibm_install_from_path)/was/config</source_directory>
                           </source>
                         </addDirectory>
                       </actions>
                     </unit>
                  </SIU>
                </installableUnit>
                <installableUnit>
                   <SIU IUname="Common_WAS_Express_Libraries" type=
         "http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows">
                     <IUdefinition>
                       <identity>
                         <name>Common WAS Express Libraries</name>
                         <UUID>123456789012345678901234503</UUID>
                         <version mod="6" rel="0" ver="4"/>
                         <displayName>
                           <short_default_text key="WAS_LIB">"Common WAS
       Express Libraries"</short_default_text>
                         </displayName>
                         <manufacturer>
                           <short_default_text key="IBM_WAS">"WAS
       Express"</short_default_text>
                         </manufacturer>
                       </identity>
                     </IUdefinition>
                     <unit xsi:type=
                       "com.ibm.iudd.schemas.os.container.actions:OsActionGroup">
                       <actions>
                         <addDirectory sequenceNumber="0" add="true"
                                  descend_dirs="true">
                           <destination>$(ibm_install_dir)/WAS Express
       5.0/lib</destination>
                           <source>
                             <source_directory
                             >$(ibm_install_from_path)/was/lib</source_directory>
                           </source>
                         </addDirectory>
                       </actions>
                     </unit>
                  </SIU>
                </installableUnit>
              </CIU>
              <target>OperatingSystem</target>
            </installableUnit>
          </solutionModule>
          <rootInfo>
            <schemaVersion mod="1" rel="1" ver="1"/>
            <build>0</build>
            <size>0</size>
          </rootInfo>
          <customChecks>
            <customCheck customCheckId="AFileExistsCheck">
              <invocation_string>java -cp $(classpath) FileExists $(filename)
              </invocation_string>
              <std_output_file>AFileExistsCheck.out</std_output_file>
              <std_error_file>AFileExistsCheck.err</std_error_file>
              <completion_block>
                <return_code to="0" from="0">
                     <severity>SUCCESS</severity>
                </return_code>
                <return_code to="1" from="2">
                     <severity>FAILURE</severity>
                </return_code>
              </completion_block>
              <working_dir>.</working_dir>
              <timeout>100</timeout>
            </customCheck>
          </customChecks>
          <topology>
            <target id="OperatingSystem" type=
       "http://w3.ibm.com/namespaces/2003/OS_componentTypes:Operating_System"/>
            <target id="WindowsTarget" type=
       "http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows"/>
            <target id="LinuxTarget" type=
       "http://w3.ibm.com/namespaces/2003/OS_componentTypes:Linux"/>
            <target id="AIXTarget" type=
       "http://w3.ibm.com/namespaces/2003/OS_componentTypes:IBMAIX"/>
          </topology>
          <groups>
            <group>
              <description language_bundle="SIDemo.properties">
                <short_default_text key="GROUP_KEY"/>
              </description>
            </group>
          </groups>
       </com.ibm.iudd.schemas.smd:rootIU>
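The AFileExistsCheck custom check above shells out to a Java program and interprets its exit code: the completion block maps 0 to SUCCESS and 1 through 2 to FAILURE. The FileExists class itself isn't shown in the listing, so here is one minimal sketch of what it might look like:

```java
import java.io.File;

public class FileExists {
    // True when the given path exists on the local filesystem.
    static boolean exists(String path) {
        return new File(path).exists();
    }

    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("Usage: java FileExists <filename>");
            System.exit(2); // codes 1-2 map to FAILURE in the completion block
        }
        boolean found = exists(args[0]);
        System.out.println(args[0] + (found ? " exists" : " not found"));
        // Exit code 0 maps to SUCCESS, 1 to FAILURE.
        System.exit(found ? 0 : 1);
    }
}
```

Invoked as java -cp $(classpath) FileExists $(filename), True_File_Check passes because "." always exists, while False_File_Check fails whenever /Dummy.txt is absent.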




      Installing a package
      IBM has partnered with two of the leading providers of software installation
      products, InstallShield and Zero G, to integrate these autonomic solution
      installation and deployment technologies into their installer products. Both
      vendors' installers are demonstrated in scenarios provided with the Autonomic
      Computing Toolkit.

      For an in-depth explanation of the scenarios and how they work, see the Autonomic
      Computing Toolkit Scenario Guide.




      Section 4. Administering software

      The Integrated Solutions Console
      Once you've installed the software, you have to administer it. In a worst-case
      scenario, every application has its own administration console, a fat client that is
      installed on a particular machine. To administer that application, the right person has
      to sit in front of the right machine. And in each case, the administrator has to learn
      the operation of the unique console and manage permissions for other users through
      it.

      The Integrated Solutions Console solves a lot of these problems. First, it's a
      browser-based application, so the "right person" can administer it from anywhere, as
      long as the computer he or she is sitting in front of can access the server on which
      Integrated Solutions Console is installed over the network. Second, you can use it to
      manage all of your autonomically aware applications, so learning is limited to
      navigating the features of this single console. You can even manage administration
      permissions for accessing other applications through the Integrated Solutions
      Console.

      Let's take a look at how it works.


      The Integrated Solutions Console architecture



      The Integrated Solutions Console is based on IBM's WebSphere Portal product, in
      that all of the components you use to manage your applications are built as portlets
      and integrated into that Web-based application. This architecture enables you to take
      advantage of the framework already in place for Portal, such as the Credential
      Vault, discussed in Managing security. As a Web-based application, it can be used
      from multiple machines and by multiple users (as long as they have the appropriate
      permissions), and any number of applications can be integrated into it through
      components constructed in WebSphere Studio Application Developer (WSAD).

     Components are based on portlets, which are themselves specialized versions of
     Java servlets. We'll discuss the creation of a component in more detail in Creating a
     component, but the basic idea is that you create a small server-side application and
     use XML configuration files to tell the Integrated Solutions Console what to do with it.




     Now let's look at actually creating a component.


     Creating a component



       It's impossible to describe the complete process of creating an Integrated Solutions
       Console component in just one section (see Resources for a tutorial on doing just
       that), but here's an overall view of the process:


             1.     Install the Integrated Solutions Console Toolkit for WebSphere
                    Application Developer, Portal Toolkit and the Eclipse help framework with
                    the Integrated Solutions Console installation routine.

             2.     Using WSAD, create a portlet project.

             3.     Create a portlet or portlets to manage your application.

             4.     Create an Integrated Solutions Console Component Project.

             5.     Assign the project to an administration suite.

             6.     Add the portlets to the component.

             7.     Determine the placement for each portlet. Some portlets are displayed in
                    the work pane, some are shown as the result of a message sent from
                    another portlet, and some are only opened within other portlets.

             8.     Determine the layout of the work pane and the order items appear on the
                    navigation tree.

             9.     Set security for the component.

             10. Add help resources.

             11. Export the component project as a .WAR file and deploy it to the
                 Integrated Solutions Console server.

             12. Connect to the Integrated Solutions Console and operate your
                 component.


      Managing security
      One of the strengths of the Integrated Solutions Console is the way in which it eases
      the burden of managing permissions for administrative tasks. In addition to the fact
      that permissions for all of your autonomically aware applications can be managed
      from within the console, you also have the advantage of the layers of abstraction
      that Integrated Solutions Console uses.

      For one thing, you never actually give an individual user permissions for a specific
      resource. Rather, you assign users to groups, or "entitlements," and then you give
      the entitlements permissions on the portlets that control the resource. In other
      words, you can create an entitlement such as MagnetPeople that has administrative
      access to some MagnetApp components and user access to others. To change the
      set of people who have these permissions, you can add people to or remove people
      from the MagnetPeople group.
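To make the indirection concrete, here's a toy model of the entitlement mapping (the class and all the names in it are hypothetical; the console manages these relationships for you):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model: users are never granted portlet permissions directly.
// They are placed in an entitlement group, and the group carries the
// permissions on the portlets that control the resource.
public class EntitlementSketch {
    private final Map<String, Set<String>> members = new HashMap<>();     // group -> users
    private final Map<String, Set<String>> permissions = new HashMap<>(); // group -> permissions

    void addMember(String group, String user) {
        members.computeIfAbsent(group, g -> new HashSet<>()).add(user);
    }

    void grant(String group, String permission) {
        permissions.computeIfAbsent(group, g -> new HashSet<>()).add(permission);
    }

    // A user holds a permission only through membership in some group.
    boolean userHas(String user, String permission) {
        for (Map.Entry<String, Set<String>> e : members.entrySet()) {
            Set<String> perms = permissions.get(e.getKey());
            if (e.getValue().contains(user) && perms != null && perms.contains(permission)) {
                return true;
            }
        }
        return false;
    }
}
```

Changing who can administer MagnetApp then means editing the MagnetPeople membership, never touching the permissions themselves.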

     This structure provides a layer of insulation whose value may not be immediately
     obvious. If you want to give user Sarah the ability to make changes to a DB2
     database, under normal circumstances you would have to provide her with a
     username and password for the database itself. In the case of the Integrated
     Solutions Console, you give her permissions on the portlet, and the portlet accesses
      the database. However, the portlet doesn't need to have a username and password
      for the database either. Instead, it can access the Credential Vault (a feature of
      WebSphere Portal that has been included in the Integrated Solutions Console). The
      Credential Vault "slot" holds all of the information needed to access the resource (in
      this case, the database), so neither the user nor the portlet the user is accessing
      ever needs to know the actual authentication information. You can use the Credential
      Vault not just for user IDs and passwords, but also for certificates and private keys.
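The same indirection can be sketched for the vault itself. The slot name and Credential shape below are invented; the real feature is WebSphere Portal's Credential Vault API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the Credential Vault indirection: a portlet asks
// the vault for a named slot and receives the stored secret, so neither
// the user nor the portlet configuration ever embeds the database
// password itself.
public class VaultSketch {
    static class Credential {
        final String userId;
        final String secret; // could equally be a certificate or private key
        Credential(String userId, String secret) {
            this.userId = userId;
            this.secret = secret;
        }
    }

    private final Map<String, Credential> slots = new HashMap<>();

    void storeSlot(String slotName, Credential credential) {
        slots.put(slotName, credential);
    }

    Credential getSlot(String slotName) {
        return slots.get(slotName); // null when no such slot exists
    }
}
```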




     Section 5. Logging and problems

     The Generic Log Adapter
     The goal of an autonomic system is to create an environment in which all
     applications can be monitored and tracked from within a single location, such as the
     Integrated Solutions Console. For that to happen, the overall system needs to be
     able to understand messages put out by an application and any servers on which it
     depends. That means that logging of exceptions and events must first be translated
     into some sort of common format. In the case of the IBM Autonomic Computing
     Toolkit, that format is called the Common Base Events model. We'll talk about that in
     more detail next, but first let's deal with the issue of how we get to that point.

     If you were building an application from scratch, it would make sense to output all
     events in the Common Base Events format so they could be immediately analyzed
     by the autonomic system. If, however, you have a legacy application that already
     puts out events in some other form, you have a problem. You can either rewrite the
     application to change the log format, which isn't likely, or you can choose to convert
     the logs it does produce to the Common Base Events format.

      The Generic Log Adapter serves this purpose. The GLA is designed to take any
      text-formatted log and convert its entries into Common Base Events so they can be
      used as input by other autonomic tools. To do that, you define the transformation
      within Eclipse by creating an adapter (as we'll discuss in Using the Generic Log
      Adapter), and then run the GLA against the desired target log.

     Before we look at how to do that, let's look at what we're trying to accomplish.





      Common Base Events
      Common Base Events are just that: a consistent, published way to express the
      events that normally happen during the functioning of an application, such as
      starting or stopping a process or connecting to a resource. The Common Base
      Events specification creates specific situation types, such as START, STOP,
      FEATURE, and CREATE, and specifies how they must be represented. The Common
      Base Events specification also provides a way to identify components experiencing
      or observing events, as well as the correlation of events into situations that describe
      them.

      Common Base Events are expressed as XML, such as this example adapted from
      the specification:
       <?xml version="1.0" encoding="UTF-8"?>
       <CommonBaseEvents xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:noNamespaceSchemaLocation="commonbaseevent1_0.xsd">
          <AssociationEngine id="a0000000000000000000000000000000"
                 name="myassociationEngineName" type="Correlated" />
          <CommonBaseEvent creationTime="2001-12-31T12:00:00"
                 elapsedTime="0" extensionName="CommonBaseEvent"
                 globalInstanceId="i0000000000000000000000000000000"
                 localInstanceId="myLocalInstanceId" priority="0"
                 repeatCount="0" sequenceNumber="0" severity="0"
                 situationType="mySituation">
             <contextDataElements name="Name" type="myContextType">
                <contextValue>contextValue</contextValue>
             </contextDataElements>
             <extendedDataElements name="z" type="Integer">
                <values>1</values>
             </extendedDataElements>
             <associatedEvents associationEngine="a0000000000000000000000000000000"
                    resolvedEvents="i0000000000000000000000000000001" />
             <reporterComponentId application="myApplication" component="myComponent"
                    componentIdType="myComponentIdType" executionEnvironment="myExec"
                    instanceId="myInstanceId" location="myLocation"
                    locationType="myLocationType" processId="100"
                    subComponent="mySubComponent" threadId="122" />
             <sourceComponentId application="myApplication1"
                    component="myComponent1" componentIdType="myComponentIdType1"
                    executionEnvironment="myExec1" instanceId="myInstanceId1"
                    location="myLocation1" locationType="myLocationType1"
                    processId="102" subComponent="mySubComponent1"
                    threadId="123" />
             <msgDataElement msgLocale="en-US">
                <msgCatalogTokens value="2" />
                <msgId>myMsgId2</msgId>
                <msgIdType>myMsgIdType</msgIdType>
                <msgCatalogId>myMsgCatalogId</msgCatalogId>
                <msgCatalogType>myMsgCatalogType</msgCatalogType>
                <msgCatalog>myMsgCatalog</msgCatalog>
             </msgDataElement>
          </CommonBaseEvent>
       </CommonBaseEvents>


       From the tags you can see that the component reporting the event is
       myApplication. The component experiencing the problem is myComponent1
       within myApplication1. The reporting component does not necessarily have to be
       different from the one experiencing difficulty, although it is in the example
       shown here.
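To make the attributes concrete, here's a hedged sketch of producing a minimal event from code. A real application would typically use the toolkit's event classes rather than string concatenation, and the timestamp is a placeholder:

```java
// Builds a minimal Common Base Event document using only the
// attributes discussed above. String concatenation stands in for a
// proper XML or event-factory API.
public class CbeSketch {
    static String minimalEvent(String situationType, String app, String component) {
        return "<CommonBaseEvent creationTime=\"2004-04-07T12:00:00\""
             + " extensionName=\"CommonBaseEvent\" severity=\"0\""
             + " situationType=\"" + situationType + "\">"
             + "<sourceComponentId application=\"" + app + "\""
             + " component=\"" + component + "\"/>"
             + "</CommonBaseEvent>";
    }
}
```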

      Now let's look at how you go from a raw log file to a file of Common Base Events.



     Using the Generic Log Adapter
     A Generic Log Adapter is essentially a script that runs against a raw log to create a
     Common Base Events log. To create that adapter, you use an editor that runs as an
     Eclipse plug-in. A tutorial on using the GLA to create adapters is available on
     developerWorks and a link is provided in Resources. A short view of the process
     looks like this:


            1.     You will need to have an Eclipse 3.0 or higher platform environment. If
                   you do not have one running on your machine already, download the
                   Eclipse Tooling package from the Autonomic Computing Toolkit.

            2.     Install the Generic Log Adapter and Log and Trace Analyzer Tooling
                   plug-in bundle.

            3.     Create a Simple project.

            4.     Create a new Generic Log Adapter File.

            5.     Select a template log to use in the creation of the adapter. This is a real
                   log of the type you will be transforming, so you can test the transformation
                   with real data as you define your adapter. If your goal were to create a
                   GLA for WebSphere Application Server log files, you would use a sample
                   WAS log file to create the GLA itself. Since the format stays the same for
                   all WAS log files, the adapter would work for subsequently generated
                   files. Each template must have its own context and a sensor is defined for
                   that context.

            6.     Define the extractor. The extractor contains the information necessary for
                   interpreting a specific log file, such as the convention that an error entry
                   starts with the text [error], including the characters used as separators,
                   and so on. You can also use regular expressions for complex string
                   handling to get the extractor to interpret the raw data.

            7.     Test the adapter on your template file, viewing the transformation of each
                   entry in the original log file to make sure the output is what you expect.
                   You should also look at the actual Common Base Events XML file to
                   make sure you don't need to tweak the transformation.

     Once you've created a Generic Log Adapter, you can use it from the command line
     to transform any file of the appropriate type, which allows you to embed this process
     into another application.
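To give a feel for what an extractor's rules express, here's a standalone sketch that parses a hypothetical "[severity] timestamp message" log format with a regular expression. Real adapters define these rules declaratively in the Eclipse editor rather than in code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Parses lines of a hypothetical "[severity] timestamp message" log
// format into fields, the same job an extractor rule performs before
// the fields are mapped onto Common Base Event attributes.
public class ExtractorSketch {
    private static final Pattern ENTRY =
            Pattern.compile("^\\[(\\w+)\\]\\s+(\\S+)\\s+(.*)$");

    // Returns {severity, timestamp, message}, or null for non-entries.
    static String[] extract(String rawLine) {
        Matcher m = ENTRY.matcher(rawLine);
        if (!m.matches()) {
            return null;
        }
        return new String[] { m.group(1), m.group(2), m.group(3) };
    }
}
```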


      The Log and Trace Analyzer
      Once you have the logs in the appropriate format, the next step is to analyze them.
      You can make this process easier using the Log and Trace Analyzer, which is set up
      as part of the Generic Log Adapter installation.

       The LTA serves several purposes. In addition to analyzing individual log files, it
       enables you to correlate events (bring two or more events together to create a
       situation) both within and between logs, and to analyze logs on an ongoing basis
       by attaching to a Java Virtual Machine. It also enables you to plug in a custom
       "correlation engine" to determine the relationships between events. This can let you
       see how failures in one subsystem or service cascade to create other situations,
       allowing you to identify the "root cause" of an event.

      The LTA analyzes events by comparing them to a symptom database. A symptom
      database is a collection of potential problems, signs of those problems, and potential
      solutions. By comparing the events to the symptom database, the LTA can take a
      seemingly obscure event and tell you not only what it means, but in the case of an
      error, what to do about it.
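Conceptually, the matching step works like a lookup from signs to advice. The entries below are invented; real symptom databases are XML files that you import into the tool:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Each symptom pairs a sign (a substring to look for in an event
// message) with an explanation and suggested solution, mirroring what
// a symptom database entry provides.
public class SymptomDbSketch {
    private final Map<String, String> symptoms = new LinkedHashMap<>();

    void addSymptom(String sign, String advice) {
        symptoms.put(sign, advice);
    }

    // Advice for the first matching symptom, or null if none matches.
    String diagnose(String eventMessage) {
        for (Map.Entry<String, String> entry : symptoms.entrySet()) {
            if (eventMessage.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return null;
    }
}
```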


      Using the Log and Trace Analyzer
      In order to analyze a log (or logs) using the Log and Trace Analyzer, you would take
      the following general steps:


             1.     Prepare a symptom database (or databases) by importing them and
                    specifying them as "in use." A symptom database can be local or it can be
                    on a remote server accessible by URL.

             2.     Import the log file itself.

             3.     If necessary, filter and/or sort the records to be analyzed. You can select
                    or exclude specific fields of information such as the severity, or you
                    can choose records based on specific criteria. For example, you might
                    want to include only events that were part of a specific thread or date.

             4.     Analyze the log. This action compares each record to the symptom
                    database, and if it finds a match, it highlights the record in blue. Clicking
                    the record shows the information from the symptom database.




             5.     If necessary, correlate the log. You can correlate events within a single
                    log, or you can correlate the events in a pair of logs. You can also
                    combine multiple logs into a single log in order to correlate more than two
                    logs.





     Correlating events
      Correlation of events can take place within logs or between logs. Using the LTA, you
      can view both Log interaction diagrams (within a single log) and Log thread
      interaction diagrams (between two logs).




     The LTA supports the following types of correlation "out of the box":

               • Correlation by Time
               • Correlation by URLs and Time
               • Correlation by Application IDs and Time
               • Correlation by URLs, Application IDs and Time
               • Correlation by PMI Request Metrics
     You can also create a custom correlation engine with your own specific
     requirements.
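The first strategy in the list, correlation by time, can be sketched in a few lines. Real correlation engines implement an LTA plug-in interface; this only illustrates the underlying idea, with events reduced to millisecond timestamps:

```java
import java.util.ArrayList;
import java.util.List;

// Two events from two different logs are considered correlated when
// their timestamps fall within the given window of one another.
public class TimeCorrelation {
    static List<long[]> correlate(long[] logA, long[] logB, long windowMs) {
        List<long[]> pairs = new ArrayList<>();
        for (long a : logA) {
            for (long b : logB) {
                if (Math.abs(a - b) <= windowMs) {
                    pairs.add(new long[] { a, b }); // a correlated pair
                }
            }
        }
        return pairs;
    }
}
```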




      Section 6. The autonomic manager

      The Autonomic Management Engine (AME)
      Now that we've looked at a lot of the pieces that surround an autonomic solution,
      let's get to the heart of the matter.

      In order to make any autonomic solution work, you need managed resources, and in
      order to have managed resources, you need an autonomic manager. At the center
      of IBM's Autonomic Computing Toolkit is the Autonomic Management Engine (AME).
      AME provides you with the capability of handling all four aspects of the autonomic
      control loop.

      AME communicates with your application via Resource Models, as discussed in the
      next panel.
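As a rough sketch of what that control loop (monitor, analyze, plan, execute) looks like in code, with interface and method names invented for illustration rather than taken from AME:

```java
// Hedged sketch of one pass through an autonomic control loop:
// monitor a metric, analyze it against a threshold, plan an action,
// and execute it through the resource's effector.
public class ControlLoopSketch {
    interface ManagedResource {
        int readMetric();           // sensor side
        void apply(String action);  // effector side
    }

    private final int threshold;

    ControlLoopSketch(int threshold) {
        this.threshold = threshold;
    }

    // Runs one monitor-analyze-plan-execute cycle; returns the action
    // taken, or null when no action was needed.
    String runOnce(ManagedResource resource) {
        int value = resource.readMetric();            // monitor
        boolean breached = value > threshold;         // analyze
        String action = breached ? "scale-up" : null; // plan
        if (action != null) {
            resource.apply(action);                   // execute
        }
        return action;
    }
}
```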


      Resource Models
      A Resource Model is, in many ways, like the universal adapter you can buy at the
      store when you lose the power supply for a piece of electronics. It has plugs of
      multiple sizes, and you can set the voltage and polarity to match the piece of
      equipment for which you want to provide power. A Resource Model serves the same
      purpose for "plugging in" an application and the Autonomic Management Engine.

       It comes down to this: AME needs to know how to access information from the
       application and how to provide information to the application. Is it described as
       Common Base Events? Is it a proprietary log format? Should a particular process be
       monitored? A Resource Model defines all of that.

       The IBM Autonomic Computing Toolkit comes with a Resource Model that has been
       customized for use with the Problem Determination Scenario. You can implement your
       own Resource Model by looking at the source code for the one that comes with that
       scenario.


      The Resource Model Builder
      The Resource Model Builder bundle, part of the IBM Autonomic Computing Toolkit,
      enables you to create a Resource Model for AME.





     The Resource Model includes information on the types of events to be monitored,
     where they come from, and thresholds at which actions will be triggered. For
     example, you might want to ignore an event unless it happens more than 5 times
     within an hour, or 5 polls in a row, and so on.
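The "more than 5 times within an hour" style of rule is a sliding-window threshold, which can be sketched as follows. This is purely illustrative; in practice the Resource Model declares the threshold and AME evaluates it:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Records event occurrences and reports when more than maxOccurrences
// of them have landed inside the sliding time window.
public class OccurrenceThreshold {
    private final int maxOccurrences;
    private final long windowMs;
    private final Deque<Long> timestamps = new ArrayDeque<Long>();

    OccurrenceThreshold(int maxOccurrences, long windowMs) {
        this.maxOccurrences = maxOccurrences;
        this.windowMs = windowMs;
    }

    // Record one occurrence at the given time; returns true when the
    // threshold has been exceeded within the window.
    boolean record(long nowMs) {
        timestamps.addLast(nowMs);
        while (!timestamps.isEmpty() && nowMs - timestamps.peekFirst() > windowMs) {
            timestamps.removeFirst(); // drop occurrences outside the window
        }
        return timestamps.size() > maxOccurrences;
    }
}
```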

     Now let's look at some of the techniques used when coding for use by the AME.


     Interaction styles
      Over the course of its life, any application needs to interact with resources in a
      variety of ways. In autonomic computing, we can sort these interactions into four
      classifications:

               • Sensor receive-state : In this case, the autonomic manager polls the
                 entity from which it wants to obtain information.
               • Sensor receive-notification : In this case, the autonomic manager
                 receives a message from the entity in question.
               • Effector perform-operation : In this case, the autonomic manager
                 issues a command to another managed resource to change states or
                 properties.
               • Effector call-out-request : In this case, the managed resource knows it's
                 supposed to do something, but it doesn't know what, so it contacts the
                 autonomic manager to find out.
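Of the four styles, sensor receive-notification is the push-based one: the managed resource delivers events to whatever listeners the autonomic manager has registered, instead of waiting to be polled. A minimal sketch, with invented names, might look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the sensor receive-notification style: the managed
// resource pushes events to registered listeners rather than being
// polled for its state.
public class NotifyingSensor {
    public interface Listener {
        void onEvent(String event);
    }

    private final List<Listener> listeners = new ArrayList<>();

    public void subscribe(Listener listener) {
        listeners.add(listener);
    }

    // Called by the managed resource when something happens.
    public void emit(String event) {
        for (Listener l : listeners) {
            l.onEvent(event);
        }
    }
}
```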



      The first release of the Autonomic Computing Toolkit includes classes illustrating
      how the Sensor receive-notification and Effector perform-operation interaction styles
      can be implemented in an application.


      Management topic implementation classes
      The current version of the IBM Autonomic Computing Toolkit includes the
      Management Topic Implementation classes as part of the Problem Determination
      Scenario. They include
      com.ibm.autonomic.manager.AutonomicManagerTouchPointSupport and
      com.ibm.autonomic.resource.ManagedResourceTouchPoint, both of which
      extend Java's UnicastRemoteObject class. Because of this, instances of these
      classes can be invoked remotely through the RMI registry. (See the Resources
      for more information on Java RMI.)

      Actual coding is beyond the scope of this tutorial, but consider this example of
      binding a touchpoint into the local RMI registry:

       package com.ibm.autonomic.scenario.pd.manager;

       import com.ibm.autonomic.manager.IManagerTouchPoint;
       import com.ibm.autonomic.manager.ManagerProcessControl;

       public class PDManagerProcessControl extends ManagerProcessControl
       {
           public PDManagerProcessControl()
           {
               super();
           }

           public void start()
           {
               boolean done = false;
               int count = 0;
               while ( !done )
               {
                   try
                   {
                       // Create an instance of the problem determination
                       // manager touchpoint; this extends the abstract
                       // manager touchpoint class.
                       IManagerTouchPoint mgrTouchPoint = new PDManagerTouchPoint();

                       // Publish the manager touchpoint and connect to the
                       // resource. This method is a convenience for the
                       // single-resource case; a variant exists for
                       // multiple resources.
                       connectAndPublish(mgrTouchPoint,
                                         "//localhost/ManagerTouchPoint",
                                         "//localhost/ResourceTouchPoint");
                       done = true;
                       System.out.println("Successfully connected to ResourceTouchPoint");
                   } catch (Throwable th) {
                       try {
                           Thread.sleep(5000);
                       } catch (Exception e) {
                           e.printStackTrace();
                       }
                       if (count++ > 20)
                       {
                           System.out.println(
                                  "Error connecting to managed resources: tried 20 times");
                           th.printStackTrace();
                           done = true;   // give up rather than retry forever
                       }
                       if (!done)
                       {
                           System.out.println("Retrying");
                       }
                   }
               }
           }

           public static void main(String[] args) throws Exception
           {
               // create an instance and start it up
               PDManagerProcessControl mpc = new PDManagerProcessControl();
               mpc.start();
           }
       }


     For more examples of how to use these classes, see the
     com.ibm.autonomic.scenario.pd.manager.PDAutonomicManagerTouchpointSupport
     and
     com.ibm.autonomic.scenario.pd.manager.PDManagedResourceTouchpoint
     classes in the Problem Determination Scenario.
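Because touchpoints are bound into the RMI registry, a client resolves them by name. This self-contained sketch (hypothetical interface and names, using an in-process registry) shows the bind-and-lookup round trip that connectAndPublish performs under the covers:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiTouchPointDemo {

    /** A minimal stand-in for a touchpoint interface (hypothetical). */
    public interface StatusSource extends Remote {
        String status() throws RemoteException;
    }

    public static class StatusSourceImpl extends UnicastRemoteObject
            implements StatusSource {
        protected StatusSourceImpl() throws RemoteException { super(); }
        public String status() { return "OK"; }
    }

    public static void main(String[] args) throws Exception {
        // Create an in-process registry; the toolkit code assumes one on localhost.
        Registry reg = LocateRegistry.createRegistry(51099);
        StatusSourceImpl impl = new StatusSourceImpl();
        reg.rebind("ResourceTouchPoint", impl);

        // A client looks the stub up by name, much as
        // "//localhost/ResourceTouchPoint" is resolved.
        StatusSource stub = (StatusSource) reg.lookup("ResourceTouchPoint");
        System.out.println("status=" + stub.status());

        // Unexport both objects so the JVM can exit cleanly.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(reg, true);
    }
}
```

Here client and server share one JVM for simplicity; in the Problem Determination Scenario they are separate processes talking to the same registry.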




     Section 7. Summary

     Summary
     The IBM Autonomic Computing Toolkit provides you with the tools and technologies
     you need to begin creating a solution that is self-healing, self-configuring,
     self-optimizing, and self-protecting. This tutorial provides an overview of the
     concepts integral to an understanding of how autonomic computing can be
     implemented, through autonomic control loops and autonomic tools. It also provides
     a treatment of the tools provided in the IBM Autonomic Computing Toolkit and how
     they relate to each other. These tools and technologies include:

                 • Installing applications using solution installation and deployment
                   technologies
                • Administering and configuring applications using the Integrated Solutions
                  Console
                • Common Base Events and the Generic Log Adapter
                 • Symptom databases and the role of the Log and Trace Analyzer
                • Autonomic management using the Autonomic Management Engine








      Resources
      Learn
         • The developerWorks Autonomic Computing zone includes information for those
           who are new to autonomic computing, those who want to know more about the
           core technologies, and for those who want to know more about the Autonomic
           Computing Toolkit itself.
         • An autonomic computing roadmap gives you a firm grip of the concepts
           necessary to understand this tutorial.
         • Read more about Autonomic Computing Maturity Levels.
         • Learn about Common Base Events.
         • Learn more about Java Remote Method Invocation (Java RMI).
      Get products and technologies
         • You can also download the various Autonomic Toolkit Bundles.



      About the authors
      Daniel Worden
      Daniel Worden, a Studio B author, got his first root password in 1984. He has been
      installing, de-installing, configuring, testing, and breaking applications since that time
      as both a sys admin and manager of systems administration services. In 1996, he led
      a team that developed in-house a consolidated Web reporting tool for several dozen
      corporate databases. His Server Troubleshooting, Administration & Remote Support
      (STARS) team has been working with IBM as a partner since 1998. His next book,
      Storage Networks From the Ground Up, is available from Apress in April 2004. He
      can be reached at dworden@worden.net.




      Nicholas Chase
      Nicholas Chase, a Studio B author, has been involved in Web site development for
      companies such as Lucent Technologies, Sun Microsystems, Oracle, and the Tampa
      Bay Buccaneers. Nick has been a high school physics teacher, a low-level
      radioactive waste facility manager, an online science fiction magazine editor, a
      multimedia engineer, and an Oracle instructor. More recently, he was the Chief
      Technology Officer of an interactive communications company in Clearwater, Florida,
      and is the author of five books, including XML Primer Plus (Sams). He loves to hear
      from readers and can be reached at nicholas@nicholaschase.com.





Contenu connexe

En vedette

Autonomic Computing by- Sandeep Jadhav
Autonomic Computing by- Sandeep JadhavAutonomic Computing by- Sandeep Jadhav
Autonomic Computing by- Sandeep Jadhav
Sandep Jadhav
 

En vedette (7)

NSF CAC Cloud Interoperability Testbed Projects
NSF CAC Cloud Interoperability Testbed ProjectsNSF CAC Cloud Interoperability Testbed Projects
NSF CAC Cloud Interoperability Testbed Projects
 
Autonomics Computing (with some of Adaptive Systems) and Requirements Enginee...
Autonomics Computing (with some of Adaptive Systems) and Requirements Enginee...Autonomics Computing (with some of Adaptive Systems) and Requirements Enginee...
Autonomics Computing (with some of Adaptive Systems) and Requirements Enginee...
 
Autonomic Computing - Diagnosis - Pinpoint Summary
Autonomic Computing - Diagnosis - Pinpoint SummaryAutonomic Computing - Diagnosis - Pinpoint Summary
Autonomic Computing - Diagnosis - Pinpoint Summary
 
9. the semantic grid and autonomic grid
9. the semantic grid and autonomic grid9. the semantic grid and autonomic grid
9. the semantic grid and autonomic grid
 
Autonomic Computing by- Sandeep Jadhav
Autonomic Computing by- Sandeep JadhavAutonomic Computing by- Sandeep Jadhav
Autonomic Computing by- Sandeep Jadhav
 
AutonomicComputing
AutonomicComputingAutonomicComputing
AutonomicComputing
 
Autonomic computing seminar documentation
Autonomic computing seminar documentationAutonomic computing seminar documentation
Autonomic computing seminar documentation
 

Similaire à A quick tour of autonomic computing

software configuration management
software configuration managementsoftware configuration management
software configuration management
Fáber D. Giraldo
 
Drools Presentation for Tallink.ee
Drools Presentation for Tallink.eeDrools Presentation for Tallink.ee
Drools Presentation for Tallink.ee
Anton Arhipov
 
3Audit Software & Tools.pptx
3Audit Software & Tools.pptx3Audit Software & Tools.pptx
3Audit Software & Tools.pptx
jack952975
 

Similaire à A quick tour of autonomic computing (20)

Top 10 DevOps Areas Need To Focus
Top 10 DevOps Areas Need To FocusTop 10 DevOps Areas Need To Focus
Top 10 DevOps Areas Need To Focus
 
DevOps explained
DevOps explainedDevOps explained
DevOps explained
 
BSC Software & Software engineering-UNIT-IV
BSC Software & Software engineering-UNIT-IVBSC Software & Software engineering-UNIT-IV
BSC Software & Software engineering-UNIT-IV
 
construction management system final year report
construction management system final year reportconstruction management system final year report
construction management system final year report
 
Application Lifecycle Management (ALM).pdf
Application Lifecycle Management (ALM).pdfApplication Lifecycle Management (ALM).pdf
Application Lifecycle Management (ALM).pdf
 
Top 3 Useful Tools for DevOps Automation -
Top 3 Useful Tools for DevOps Automation -Top 3 Useful Tools for DevOps Automation -
Top 3 Useful Tools for DevOps Automation -
 
Autopilot automatic data center management
Autopilot automatic data center managementAutopilot automatic data center management
Autopilot automatic data center management
 
Week_01-Intro to Software Engineering-1.ppt
Week_01-Intro to Software Engineering-1.pptWeek_01-Intro to Software Engineering-1.ppt
Week_01-Intro to Software Engineering-1.ppt
 
software configuration management
software configuration managementsoftware configuration management
software configuration management
 
Introduction To Software Concepts Unit 1 & 2
Introduction To Software Concepts Unit 1 & 2Introduction To Software Concepts Unit 1 & 2
Introduction To Software Concepts Unit 1 & 2
 
Maveric - Automation of Release & Deployment Management
Maveric -  Automation of Release & Deployment ManagementMaveric -  Automation of Release & Deployment Management
Maveric - Automation of Release & Deployment Management
 
ISTQB Agile Tester - Agile Test Tools
ISTQB Agile Tester - Agile Test ToolsISTQB Agile Tester - Agile Test Tools
ISTQB Agile Tester - Agile Test Tools
 
SE
SESE
SE
 
Drools Presentation for Tallink.ee
Drools Presentation for Tallink.eeDrools Presentation for Tallink.ee
Drools Presentation for Tallink.ee
 
Making software development processes to work for you
Making software development processes to work for youMaking software development processes to work for you
Making software development processes to work for you
 
3Audit Software & Tools.pptx
3Audit Software & Tools.pptx3Audit Software & Tools.pptx
3Audit Software & Tools.pptx
 
SE Lecture 1.ppt
SE Lecture 1.pptSE Lecture 1.ppt
SE Lecture 1.ppt
 
SE Lecture 1.ppt
SE Lecture 1.pptSE Lecture 1.ppt
SE Lecture 1.ppt
 
ch1.ppt
ch1.pptch1.ppt
ch1.ppt
 
Ansible.pptx
Ansible.pptxAnsible.pptx
Ansible.pptx
 

Dernier

Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Victor Rentea
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Victor Rentea
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 

Dernier (20)

Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024AXA XL - Insurer Innovation Award Americas 2024
AXA XL - Insurer Innovation Award Americas 2024
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with Milvus
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUKSpring Boot vs Quarkus the ultimate battle - DevoxxUK
Spring Boot vs Quarkus the ultimate battle - DevoxxUK
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024
 

A quick tour of autonomic computing

  • 1. A quick tour of Autonomic Computing Skill Level: Introductory Daniel Worden Author Nicholas Chase Author 07 Apr 2004 Autonomic computing architecture is a range of software technologies that enable you to build an information infrastructure that can, to lesser and greater degrees, manage itself, saving countless hours (and dollars) in human management. And all this without giving up control of the system. This tutorial explains the concepts behind autonomic computing and looks at the tools at your disposal for making it happen - today. Section 1. Before you start About this tutorial This tutorial explains the general concepts behind autonomic computing architectures, including control loops and autonomic-managed resources. It also discusses the various tools that are currently available within the IBM Autonomic Computing Toolkit, providing a concrete look at some of the tasks you can accomplish with it. The tutorial covers: • Autonomic computing concepts • Contents of the IBM Autonomic Computing Toolkit • Installing applications using solution installation and deployment technologies • Application administration using the Integrated Solutions Console A quick tour of Autonomic Computing © Copyright IBM Corporation 1994, 2006. All rights reserved. Page 1 of 27
  • 2. developerWorks® ibm.com/developerWorks • Problem determination using the Generic Log Adapter and the Log and Trace Analyzer • A brief look at Common Base Events • Autonomic management using the Autonomic Management Engine (AME) • AME Resource Models and a brief look at programming using AME When you have completed this tutorial, you should feel comfortable looking further into any of these topics, while understanding their place in the autonomic computing infrastructure. This tutorial is for developers who not only want to understand the concepts behind autonomic computing architectures, but who also want begin implementing autonomic computing solutions in their applications with the help of the IBM Autonomic Computing Toolkit. Although this tutorial is aimed at developers, no actual programming is required in order to get the full value from it. When present, code samples are shown using the Java language, but an in-depth understanding of the code is not required to gain insight into the concepts they illustrate. Prerequisites This tutorial doesn't actually require you to install any tools, but it does look at the following autonomic computing bundles, available from developerWorks at: http://www.ibm.com/developerworks/autonomic/probdet1.html. • Autonomic Management Engine • Resource Model Builder • Integrated Solutions Console • Solution installation and deployment scenarios • Generic Log Adapter and Log and Trace Analyzer Tooling • Generic Log Adapter Runtime and Rule Sets • Problem Determination Scenario • Eclipse Tooling Section 2. What is autonomic computing? A quick tour of Autonomic Computing Page 2 of 27 © Copyright IBM Corporation 1994, 2006. All rights reserved.
  • 3. ibm.com/developerWorks developerWorks® The current state of affairs As long-time developers, we've found that the number one most underestimated area of software development is maintenance. Everybody wants to build the application, but once it's out there, developers just want to forget about it. They tend not to think about the administrators who have to spend hours and days and even weeks configuring their systems for the software to work right, or to prevent problems such as full drives and bad equipment. As a general rule, a developer is only worried about what needs to be done to the system to install his or her own product; the effects on other, previously installed products are frequently overlooked. What's more, optimizing a setup can be a risky process, with changes made in one area causing problems in another. Management of multiple systems usually means learning several different management applications. How many hours, and by extension dollars (or Euros, or yen, or rupees) are consumed just keeping software up and running? Surely there are more productive things these human resources can be doing, such as planning to make the business run better? In short, there's a need in today's computing infrastructure for a better way to do things. One way to reach that goal is through autonomic computing technology. Where we want to go A system built on autonomic computing concepts tries to eliminate much of that potentially wasted or duplicated effort. Look at the current state of affairs, but this time, consider how to apply the principles of autonomic computing technology. First, the software installation process should truly be a one-click process. The software would know not only what dependencies must be fulfilled for its own operation, but also how to get those dependencies met if the system didn't already have them in place. Examples would include such things as downloading operating system patches and provisioning additional drive space. 
The install would know how to avoid conflicts with already deployed applications. Once installed, it would be managed from a single, common, management interface used for all applications. After the software was up and running, it would consistently monitor itself and the environment for problems. When detected, it would determine the source of the error and resolve it. All of this would be managed from a consolidated point, so the CEO or system administrator could see at a glance who had access to what, even though resources might be spread over multiple applications, platforms and locations. In other words, you're looking for a system that is self-configuring, self-healing, self-protecting, and self-optimizing. So how do you get it? Autonomic infrastructure A quick tour of Autonomic Computing © Copyright IBM Corporation 1994, 2006. All rights reserved. Page 3 of 27
  • 4. developerWorks® ibm.com/developerWorks In order to create such a system, you've got to create an infrastructure that enables a system to monitor itself and take action when changes occur. To accomplish that, an autonomic computing solution is made up of a an autonomic manager and managed resources. The autonomic manager and the managed resources talk to each other using the standard Sensor and Effector APIs delivered through the managed resource touchpoint. The managed resources are controlled system components that can range from single resources such as a server, database server, or router to collections of resources like server pools, clusters, or business applications. The autonomic manager implements autonomic control loops by dividing them into four parts: monitor, analyze, plan, and execute. The control loop carries out tasks as efficiently as possible based on high-level policies. • Monitor: Through information received from sensors, the resource monitors the environment for specific, predefined conditions. These conditions don't have to be errors; they can be a certain load level or type of request. • Analyze: Once the condition is detected, what does it mean? The resource must analyze the information to determine whether action should be taken. • Plan: If action must be taken, what action? The resource might simply notify the administrator, or it might take more extensive action, such as provisioning another hard drive. • Execute: It is this part of the control loop that sends the instruction to the effector, which actually affects or carries out the planned actions. Figure 1. Autonomic control loops A quick tour of Autonomic Computing Page 4 of 27 © Copyright IBM Corporation 1994, 2006. All rights reserved.
  • 5. ibm.com/developerWorks developerWorks® All of the actions in the autonomic control loop either make use of or supplement the knowledge base for the resource. For example, the knowledge base helps the analysis phase of the control loop to understand the information it's getting from the monitor phase. It also provides the plan phase with information that helps it select the action to be performed. Note that the entire autonomic control loop need not be handled by a single application. For example, you might have one application (such as IBM Director) that handles the Monitor and Analyze phases, while a second product (such as Toshiba ClusterPerfect) handles the plan and execute phases. Autonomic maturity levels It would be unreasonable to expect all software and system administration processes to suddenly go from a completely manual state to a completely autonomic state. Fortunately, with the autonomic computing model, that's not a requirement. In fact, there are typically five levels of autonomic maturity, which are measures of where any given process is on that spectrum. These maturity levels are: • Basic - Personnel hold all of the important information about the product and environment. Any action, including routine maintenance, must be planned and executed by humans. • Managed - Scripting and logging tools automate routine execution and reporting. Individual specialists review information gathered by the tools to A quick tour of Autonomic Computing © Copyright IBM Corporation 1994, 2006. All rights reserved. Page 5 of 27
  • 6. developerWorks® ibm.com/developerWorks make plans and decisions. • Predictive - As preset thresholds are tripped, the system raises early warning flags and recommends appropriate actions from the knowledge base. The centralized storage of common occurrences and experience also leverages the resolution of events. • Adaptive - Building on the predictive capabilities, the adaptive system is empowered to take action based on the situation. • Autonomic - Policy drives system activities, including allocation of resources within a prioritization framework and acquisition of prerequisite dependencies from outside sources. Most systems today are at the basic or managed level, though there are, of course, exceptions. Take a look at areas in which systems most need improvement, and the tools that can makes this possible. The Autonomic Computing Toolkit To make it easier to develop an autonomic solution, IBM has put together the Autonomic Computing Toolkit. Strictly speaking, this is not a single toolkit, but rather several "bundles" of applications intended to work together to produce a complete solution. These bundles include: • Autonomic Management Engine • Resource Model Builder • Integrated Solutions Console • Solution installation and deployment • Generic Log Adapter and Log and Trace Analyzer You can use each of these tools individually, or you can combine them for a larger scope solution. In fact, the toolkit also includes a number of "scenarios" that demonstrate the use of some of these applications and their integration, including: • Problem Determination Scenario • Solution installation and deployment scenario using ISSI • Solution installation and deployment scenario using InstallAnywhere • Solution installation and deployment samples scenario The toolkit also includes the Autonomic Computing Information bundle, which includes much of the available documentation for the other bundles. 
If you do not already have Eclipse 3.0 or higher, the toolkit provides an Eclipse Tooling bundle that includes Eclipse runtimes needed by toolkit components. A quick tour of Autonomic Computing Page 6 of 27 © Copyright IBM Corporation 1994, 2006. All rights reserved.
  • 7. ibm.com/developerWorks developerWorks® Now, take a look at what these bundles provide. Installation and administration Before you even think about problem resolution (which we'll talk about next) a software application has to be up and running. That means installation and configuration. We'll start with installation. The solution installation and deployment capabilities that come with the IBM Autonomic Computing Toolkit enable you to build an installation engine that can detect missing dependencies and take action. It also tracks the applications that have been installed and their dependencies, so subsequent installs - and uninstalls -- don't step on each other, even if they come from different vendors. (As long as they're all autonomically aware, of course.) It does that by maintaining a database of conditions, and checking it before performing the installation. It also communicates with the platform through touchpoints . These server-specific interfaces provide data on available resources, installed components, and their relationships. Touchpoints are a key enabler for autonomic dependency checking. For more information on the dependency checker that comes with the solution installation and deployment technologies, see Installing a solution . Once you've installed the software, you'll need to administer it. Rather than forcing you to learn a new management console for every application, the IBM Autonomic Computing Toolkit provides the Integrated Solutions Console. This console is based on a portal infrastructure, so any autonomically aware application can install components for the administrator to use within this single console. The Integrated Solutions Console also provides for centralized permissions management through access to a credential vault. In this way, single sign-on, system access, and admnistration can be consolidated and controlled for multiple servers. 
For more information on the Integrated Solutions Console, see The Integrated Solutions Console. Problem determination and analysis The software is installed, configured, and running. This is of course when something goes horribly wrong. With the old way of doing things, you'd have to sort through possibly megabytes of log files looking for something you personally might not even recognize. In an autonomic solution, events such as log entries are presented in a common form (the Common Base Events format), correlated into a situation, and compared to a symptom database to determine the problem and potential actions. To accomplish all that, the Autonomic Computing Toolkit includes the Generic Log Adapter (GLA) and the Log and Trace Analyzer. The GLA converts logs from legacy applications into Common Base Events that can then be read by the LTA. The LTA consults a symptom database to correlate the events, identify the problem, and perform the actions prescribed for that problem. For more information on these tools, see The Generic Log Adapter. A quick tour of Autonomic Computing © Copyright IBM Corporation 1994, 2006. All rights reserved. Page 7 of 27
Analysis can take place in a number of ways. For example, you might write JavaBeans or other code to analyze events and situations. Or you might find it simpler to use a tool that performs much of the analysis for you, such as the Autonomic Management Engine.

Monitoring and centralized management

The Autonomic Management Engine (AME) is meant to be a fairly self-contained autonomic computing solution, in that it can monitor the environment and an application, analyze any conditions that surface, plan a response, and execute it. It is an extensive monitoring engine that uses resource models as its source of information. A resource model tells AME what to look for, such as the termination of a process or a particular log event; because it is flexible, AME can monitor any application. All you need to do is provide an appropriate resource model. To do that, you can use the IBM Autonomic Computing Toolkit's Resource Model Builder, which is a plug-in for the Eclipse IDE.

Optimization and other issues

Some areas of the autonomic computing vision are not yet available as part of this version of the IBM Autonomic Computing Toolkit, but can be realized using independent products. For example, a major goal of autonomic computing technology is to create a system that is self-optimizing. An orchestration product such as IBM Tivoli Intelligent ThinkDynamic Orchestrator lets you monitor the overall system using autonomic computing concepts to determine when an action needs to be taken. The Orchestrator then takes the appropriate action as defined by the administrator or by business policy. You can also provide for autonomic management of provisioning issues, such as allocating or deallocating servers based on the relative loads of applications and their importance. At a lower level, IBM technologies such as the Adaptive Replacement Cache yield significant performance increases over least-recently-used (LRU) caching strategies.
Whether you are coding multiserver load-balancing applications or simply looking for ways to improve database query performance, the autonomic computing model from IBM provides tools and techniques to move your code to the next level of autonomic maturity.

Another goal of autonomic computing technology is heterogeneous workload management, in which the entire application as a whole can be optimized. One example would be a system that includes a web application server, a database server, and remote data storage. Heterogeneous workload management makes it possible to find bottlenecks in the overall system and resolve them. Products such as the Enterprise Workload Manager (EWLM) component of the IBM Virtualization Engine provide heterogeneous workload management capabilities. They enable you
to automatically monitor and manage multi-tiered, distributed, heterogeneous or homogeneous workloads across an IT infrastructure to better achieve defined business goals for end-user services.

Now, take a closer look at the tools that make up the Autonomic Computing Toolkit.

Section 3. Installing a solution

What solution installation and deployment technologies do

When you install a software package, you generally have three goals:

• Satisfy any dependencies or prerequisites
• Install the actual software
• Don't break anything that already works

This last goal is often the toughest part. With today's complex software, version conflicts often cause problems. Solution installation and deployment technologies solve many of these problems by maintaining a database of software that's already been installed and the relevant dependencies. When you first attempt to install a new piece of software, the solution installation and deployment technologies check to see whether the prerequisites are met by using web services to access "touchpoints," which provide information such as the available RAM and disk space, the operating system, and so on. They also check the database of installed software for other required applications, and make sure that installing the new application isn't going to conflict with the prerequisites of any existing applications. When the installation is complete, the details of the new application are added to the solution installation and deployment database so that all the information is available for the next install. This information also comes in handy when you attempt to uninstall a software solution; a central database of prerequisites makes it easier to know what you can and cannot safely delete.

Installable units

The first thing to understand about solution installation and deployment is that there are two major pieces involved. The first is the actual installer.
This is the infrastructure that includes the installation database and the code that interprets and
executes the actions specified in the package itself. The second is the package of files to be installed. And what about this package? What is it? Depending on the situation, a package can consist of a solution module (SM), which can contain other solution modules as well as one or more container installable units. A container installable unit (CIU) can contain zero or more other container installable units, and one or more smallest installable units. The smallest installable unit (SIU) is exactly what it sounds like: a single piece of software that includes files to be copied to the target system. It's important to understand that only the SIU contains actual physical files or groups of files. The CIU and SM are concepts, specified in the installable unit's deployment descriptor. This is an XML file that includes information on dependencies and on actions that must take place in order for the software to be successfully installed. Let's look at the process of creating a simple package to be installed via the solution installation and deployment environment.

Create a package

Creating the package of software to install entails the following basic steps:

• Install the solution installation and deployment bundle.
• Install the Solution Module Descriptor plug-in for Eclipse. (This is a convenience step; the descriptor is simply an XML file, so you can create it with a text editor, but it's complex, so you're better off using an editor that understands its structure.)
• Create a project that includes the actual files to install. This may be as straightforward as a text file or as complex as WebSphere Portal. In any case, this project is independent of any solution installer files; it's just the application itself, and the application becomes the smallest installable unit.
• Create the deployment descriptor. This step includes the following tasks:
  • Create the container installable unit.
  • Add the SIU to the CIU.
  • Set any SIU dependencies.
  • Add actions such as "add file" from the packaged ZIP file to the file system.
  • Set parameters such as install_dir.
  • Set the target, including operating system information.

The descriptions here are intended only to give you an overview of what's involved. For a complete look at creating a solution module and deploying it, see the solution installation and deployment tutorial. (A link is available in Resources.)

A sample descriptor

When you're done, you'll have a deployment descriptor something like this one, abridged from the InstallShield for Multiplatforms scenario:

<?xml version="1.0" encoding="UTF-8"?>
<com.ibm.iudd.schemas.smd:rootIU
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:com.ibm.iudd.schemas.os.container.actions="http://www.ibm.com/IUDD/schemas/OSContainerActions"
    xmlns:com.ibm.iudd.schemas.smd="http://www.ibm.com/IUDD/schemas/SMD">
<solutionModule IUname="Sample Install">
  <IUdefinition>
    <identity>
      <name>Sample Install</name>
      <UUID>123456789012345678901234567</UUID>
      <version lev="4" ver="0"/>
      <displayName language_bundle="Sample.properties">
        <short_default_text key="SAMPLE">"Sample Install"</short_default_text>
        <long_default_text key="Sample_Install_DEMO">"Sample Install Demonstration"</long_default_text>
      </displayName>
      <manufacturer>
        <short_default_text key="IBM">"IBM"</short_default_text>
        <long_default_text key="IBM">"IBM"</long_default_text>
      </manufacturer>
    </identity>
  </IUdefinition>
  <variables>
    <variable
        name="install_from_path">
      <parameter defaultValue="install/setup"/>
      <description>
        <short_default_text key="JAVA_FROM_PATH">"Getting SI demo files from path:"</short_default_text>
      </description>
    </variable>
    <variable name="install_dir">
      <parameter defaultValue="/SIDemo"/>
      <description>
        <short_default_text key="JAVA_TO_PATH">"Installing SI Demo to directory:"</short_default_text>
      </description>
    </variable>
  </variables>
  <installableUnit>
    <CIU IUname="WAS_Express_5_0"
         type="http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows">
      <IUdefinition>
        <identity>
          <name>WAS Express 5.0.2</name>
          <UUID>123456789012345678901234501</UUID>
          <version mod="6" rel="0" ver="4"/>
          <displayName>
            <short_default_text key="WASExpress">"WAS Express 5.0.2"</short_default_text>
          </displayName>
          <manufacturer>
            <short_default_text key="IBM_WAS">"IBM WAS Express Group"</short_default_text>
          </manufacturer>
        </identity>
      </IUdefinition>
      <variables>
        <variable name="ibm_install_from_path">
          <parameter defaultValue="install/setup/WAS_Exp"/>
          <description>
            <short_default_text key="INSTALL_FROM_PATH">"Installing WAS Express from path: "</short_default_text>
          </description>
        </variable>
        <variable name="ibm_install_drive">
          <parameter defaultValue="$(install_drive)"/>
          <description>
            <short_default_text key="INSTALL_TO_DRIVE">"Installing WAS Express on drive: "</short_default_text>
          </description>
        </variable>
      </variables>
      <installableUnit>
        <SIU IUname="WAS_Express_Application_Server"
             type="http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows">
          <IUdefinition>
            <identity>
              <name>J2EE Servlet Container</name>
              <UUID>123456789012345678901234502</UUID>
              <version rel="0" ver="1"/>
              <displayName>
                <short_default_text key="J2EE_SERV">"J2EE Servlet Container"</short_default_text>
              </displayName>
              <manufacturer>
                <short_default_text key="IBM_WAS">"WAS"</short_default_text>
              </manufacturer>
            </identity>
            <checks>
              <capacity checkVarName="J2EE_Serv_Processor_Speed" type="minimum">
                <description>
                  <short_default_text key="CHECK_CPU">"Checking CPU speed..."</short_default_text>
                </description>
                <propertyName>Processor</propertyName>
                <value>100</value>
              </capacity>
              <consumption checkVarName="J2EE_Serv_Memory">
                <description>
                  <short_default_text key="CHECK_RAM">"Checking available RAM..."</short_default_text>
                </description>
                <propertyName>TotalVisibleMemorySize</propertyName>
                <value>128</value>
              </consumption>
              <consumption checkVarName="J2EE_Serv_Disk_Space">
                <description>
                  <short_default_text key="CHECK_DSK">"Checking available disk space..."</short_default_text>
                </description>
                <propertyName>Partition</propertyName>
                <value>18.5</value>
              </consumption>
              <custom customCheckIdRef="AFileExistsCheck" checkVarName="True_File_Check">
                <parameter variableNameRef="classpath">.</parameter>
                <parameter variableNameRef="filename">.</parameter>
              </custom>
              <custom customCheckIdRef="AFileExistsCheck" checkVarName="False_File_Check">
                <parameter variableNameRef="classpath">.</parameter>
                <parameter variableNameRef="filename">/Dummy.txt</parameter>
              </custom>
            </checks>
            <requirements>
              <requirement name="J2EE_Serv_Requirements" operations="InstallConfigure">
                <alternative>
                  <checkItem checkVarName="J2EE_Serv_Processor_Speed"/>
                  <checkItem checkVarName="J2EE_Serv_Memory"/>
                  <checkItem checkVarName="J2EE_Serv_Disk_Space"/>
                  <checkItem checkVarName="True_File_Check"/>
                  <checkItem checkVarName="False_File_Check"/>
                </alternative>
              </requirement>
            </requirements>
          </IUdefinition>
          <variables>
            <variable name="J2EE_Serv_Processor_Speed"/>
            <variable name="J2EE_Serv_Memory"/>
            <variable name="J2EE_Serv_Disk_Space"/>
            <variable name="True_File_Check"/>
            <variable name="False_File_Check"/>
            <variable name="Filename"/>
          </variables>
          <unit xsi:type="com.ibm.iudd.schemas.os.container.actions:OsActionGroup">
            <actions>
              <addDirectory sequenceNumber="0" add="true" descend_dirs="true">
                <destination>$(ibm_install_dir)/WAS Express 5.0</destination>
              </addDirectory>
              <addDirectory sequenceNumber="0" add="true" descend_dirs="true">
                <destination>$(ibm_install_dir)/WAS Express 5.0/bin</destination>
                <source>
                  <source_directory>$(ibm_install_from_path)/was/bin</source_directory>
                </source>
              </addDirectory>
              <addDirectory sequenceNumber="1" add="true" descend_dirs="true">
                <destination>$(ibm_install_dir)/WAS Express 5.0/classes</destination>
              </addDirectory>
              <addDirectory sequenceNumber="2" add="true" descend_dirs="true">
                <destination>$(ibm_install_dir)/WAS Express 5.0/config</destination>
                <source>
                  <source_directory>$(ibm_install_from_path)/was/config</source_directory>
                </source>
              </addDirectory>
            </actions>
          </unit>
        </SIU>
      </installableUnit>
      <installableUnit>
        <SIU IUname="Common_WAS_Express_Libraries"
             type="http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows">
          <IUdefinition>
            <identity>
              <name>Common WAS Express Libraries</name>
              <UUID>123456789012345678901234503</UUID>
              <version mod="6" rel="0" ver="4"/>
              <displayName>
                <short_default_text key="WAS_LIB">"Common WAS Express Libraries"</short_default_text>
              </displayName>
              <manufacturer>
                <short_default_text key="IBM_WAS">"WAS Express"</short_default_text>
              </manufacturer>
            </identity>
          </IUdefinition>
          <unit xsi:type="com.ibm.iudd.schemas.os.container.actions:OsActionGroup">
            <actions>
              <addDirectory sequenceNumber="0" add="true" descend_dirs="true">
                <destination>$(ibm_install_dir)/WAS Express 5.0/lib</destination>
                <source>
                  <source_directory>$(ibm_install_from_path)/was/lib</source_directory>
                </source>
              </addDirectory>
            </actions>
          </unit>
        </SIU>
      </installableUnit>
    </CIU>
    <target>OperatingSystem</target>
  </installableUnit>
</solutionModule>
<rootInfo>
  <schemaVersion mod="1" rel="1" ver="1"/>
  <build>0</build>
  <size>0</size>
</rootInfo>
<customChecks>
  <customCheck customCheckId="AFileExistsCheck">
    <invocation_string>java -cp $(classpath) FileExists $(filename)</invocation_string>
    <std_output_file>AFileExistsCheck.out</std_output_file>
    <std_error_file>AFileExistsCheck.err</std_error_file>
    <completion_block>
      <return_code to="0" from="0">
        <severity>SUCCESS</severity>
      </return_code>
      <return_code to="1" from="2">
        <severity>FAILURE</severity>
      </return_code>
    </completion_block>
    <working_dir>.</working_dir>
    <timeout>100</timeout>
  </customCheck>
</customChecks>
<topology>
  <target id="OperatingSystem" type="http://w3.ibm.com/namespaces/2003/OS_componentTypes:Operating_System"/>
  <target id="WindowsTarget" type=
      "http://w3.ibm.com/namespaces/2003/OS_componentTypes:MicrosoftWindows"/>
  <target id="LinuxTarget" type="http://w3.ibm.com/namespaces/2003/OS_componentTypes:Linux"/>
  <target id="AIXTarget" type="http://w3.ibm.com/namespaces/2003/OS_componentTypes:IBMAIX"/>
</topology>
<groups>
  <group>
    <description language_bundle="SIDemo.properties">
      <short_default_text key="GROUP_KEY"/>
    </description>
  </group>
</groups>
</com.ibm.iudd.schemas.smd:rootIU>

Installing a package

IBM has partnered with two of the leading providers of software installation products, InstallShield and Zero G, to integrate these autonomic solution installation and deployment technologies into their products. These vendors' installer products are demonstrated in scenarios provided with the Autonomic Computing Toolkit. For an in-depth explanation of the scenarios and how they work, see the Autonomic Computing Toolkit Scenario Guide.

Section 4. Administering software

The Integrated Solutions Console

Once you've installed the software, you have to administer it. In a worst-case scenario, every application has its own administration console -- a fat client installed on a particular machine. To administer that application, the right person has to sit in front of the right machine. And in each case, the administrator has to learn the operation of the unique console and manage permissions for other users through it.

The Integrated Solutions Console solves a lot of these problems. First, it's a browser-based application, so the "right person" can administer it from anywhere, as long as the computer he or she is sitting in front of can reach, over the network, the server on which the Integrated Solutions Console is installed. Second, you can use it to manage all of your autonomically aware applications, so learning is limited to navigating the features of this single console. You can even manage administration permissions for accessing other applications through the Integrated Solutions Console. Let's take a look at how it works.
The Integrated Solutions Console architecture
The Integrated Solutions Console is based on IBM's WebSphere Portal product, in that all of the components you use to manage your applications are built as portlets and integrated into that web-based application. This architecture enables you to take advantage of the framework that is already in place for Portal, such as the Credential Vault, discussed in Managing security. As a web-based application, it can be used from multiple machines and by multiple users (as long as they have the appropriate permissions), and any number of applications can be integrated into it through the use of components constructed in WebSphere Studio Application Developer (WSAD). Components are based on portlets, which are themselves specialized versions of Java servlets. We'll discuss the creation of a component in more detail in Creating a component, but the basic idea is that you create a small server-side application and use XML configuration files to tell the Integrated Solutions Console what to do with it. Now let's look at actually creating a component.

Creating a component
It's impossible to describe the complete process of creating an Integrated Solutions Console component in just one section -- see Resources for a tutorial on doing just that -- but here's an overall view of the process:

1. Install the Integrated Solutions Console Toolkit for WebSphere Application Developer, the Portal Toolkit, and the Eclipse help framework with the Integrated Solutions Console installation routine.
2. Using WSAD, create a portlet project.
3. Create a portlet or portlets to manage your application.
4. Create an Integrated Solutions Console Component Project.
5. Assign the project to an administration suite.
6. Add the portlets to the component.
7. Determine the placement for each portlet. Some portlets are displayed in the work pane, some are shown as the result of a message sent from another portlet, and some are only opened within other portlets.
8. Determine the layout of the work pane and the order in which items appear on the navigation tree.
9. Set security for the component.
10. Add help resources.
11. Export the component project as a .WAR file and deploy it to the Integrated Solutions Console server.
12. Connect to the Integrated Solutions Console and operate your component.

Managing security

One of the strengths of the Integrated Solutions Console is the way in which it eases the burden of managing permissions for administrative tasks. In addition to the fact that permissions for all of your autonomically aware applications can be managed from within the console, you also have the advantage of the layers of abstraction that the Integrated Solutions Console uses. For one thing, you never actually give an individual user permissions for a specific resource. Rather, you assign users to groups, or "entitlements," and then you give the entitlements permissions on the portlets that control the resource.
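That indirection can be sketched in a few lines of code. Everything below is a hypothetical illustration -- the class, the group names, and the data structures are not the actual Integrated Solutions Console API, just the shape of the idea:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Illustrative sketch only: users belong to entitlements (groups),
 *  and permissions are granted to entitlements on portlets,
 *  never to individual users directly. */
public class EntitlementSketch {
    // entitlement name -> users who belong to it
    static Map<String, Set<String>> members = new HashMap<>();
    // entitlement name -> portlets it may administer
    static Map<String, Set<String>> grants = new HashMap<>();

    /** A user may administer a portlet if any entitlement containing
     *  the user has been granted that portlet. */
    static boolean mayAdminister(String user, String portlet) {
        for (Map.Entry<String, Set<String>> e : members.entrySet()) {
            if (e.getValue().contains(user)
                    && grants.getOrDefault(e.getKey(), Set.of()).contains(portlet)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        members.put("DbAdmins", new HashSet<>(Set.of("sarah")));
        grants.put("DbAdmins", Set.of("DatabaseAdminPortlet"));
        System.out.println(mayAdminister("sarah", "DatabaseAdminPortlet")); // true
        // Revoking access is a group-membership change, not a permission change:
        members.get("DbAdmins").remove("sarah");
        System.out.println(mayAdminister("sarah", "DatabaseAdminPortlet")); // false
    }
}
```

The point of the sketch is the insulation: changing who can administer a resource never touches the resource's own permissions.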
In other words, you can create an entitlement such as MagnetPeople that has administrative access to some MagnetApp components and user access to others. To change the set of people who have these permissions, you can add people to or remove people
from the MagnetPeople group. This structure provides a layer of insulation whose value may not be immediately obvious. If you want to give user Sarah the ability to make changes to a DB2 database, under normal circumstances you would have to provide her with a username and password for the database itself. In the case of the Integrated Solutions Console, you give her permissions on the portlet, and the portlet accesses the database. However, the portlet doesn't need to have a username and password for the database either. Instead, it can access the Credential Vault (a feature of WebSphere Portal that has been included in the Integrated Solutions Console). The credential vault "slot" holds all of the information needed to access the resource -- in this case, the database -- so neither the user nor the portlet the user is accessing ever needs to know the actual authentication information. You can use the credential vault not just for user IDs and passwords, but also for certificates and private keys.

Section 5. Logging and problems

The Generic Log Adapter

The goal of an autonomic system is to create an environment in which all applications can be monitored and tracked from within a single location, such as the Integrated Solutions Console. For that to happen, the overall system needs to be able to understand messages put out by an application and any servers on which it depends. That means that logged exceptions and events must first be translated into some sort of common format. In the case of the IBM Autonomic Computing Toolkit, that format is called the Common Base Events model. We'll talk about that in more detail next, but first let's deal with the issue of how we get to that point. If you were building an application from scratch, it would make sense to output all events in the Common Base Events format so they could be immediately analyzed by the autonomic system.
If, however, you have a legacy application that already puts out events in some other form, you have a problem. You can either rewrite the application to change the log format, which isn't likely, or you can convert the logs it does produce to the Common Base Events format. The Generic Log Adapter serves this purpose. The GLA is designed to take any text-formatted log and convert its entries into Common Base Events that can then be used as input by other autonomic tools. To do that, you define the transformation within Eclipse by creating an adapter (as we'll discuss in Using the Generic Log Adapter), and then run the GLA against the desired target log. Before we look at how to do that, let's look at what we're trying to accomplish.
Common Base Events

Common Base Events are just that: a consistent, published way to express the events that normally happen during the functioning of an application, such as starting or stopping a process or connecting to a resource. The Common Base Events specification creates specific situation types, such as START, STOP, FEATURE, and CREATE, and specifies how they must be represented. The Common Base Events specification also provides a way to identify components experiencing or observing events, as well as the correlation of events into situations that describe them. Common Base Events are expressed as XML, such as this example adapted from the specification:

<?xml version="1.0" encoding="UTF-8"?>
<CommonBaseEvents xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="commonbaseevent1_0.xsd">
  <AssociationEngine id="a0000000000000000000000000000000"
      name="myassociationEngineName" type="Correlated"/>
  <CommonBaseEvent creationTime="2001-12-31T12:00:00" elapsedTime="0"
      extensionName="CommonBaseEvent"
      globalInstanceId="i0000000000000000000000000000000"
      localInstanceId="myLocalInstanceId" priority="0" repeatCount="0"
      sequenceNumber="0" severity="0" situationType="mySituation">
    <contextDataElements name="Name" type="myContextType">
      <contextValue>contextValue</contextValue>
    </contextDataElements>
    <extendedDataElements name="z" type="Integer">
      <values>1</values>
    </extendedDataElements>
    <associatedEvents associationEngine="a0000000000000000000000000000000"
        resolvedEvents="i0000000000000000000000000000001"/>
    <reporterComponentId application="myApplication" component="myComponent"
        componentIdType="myComponentIdType" executionEnvironment="myExec"
        instanceId="myInstanceId" location="myLocation"
        locationType="myLocationType" processId="100"
        subComponent="mySubComponent" threadId="122"/>
    <sourceComponentId application="myApplication1" component="myComponent1"
        componentIdType="myComponentIdType1"
        executionEnvironment="myExec1" instanceId="myInstanceId1"
        location="myLocation1" locationType="myLocationType1" processId="102"
        subComponent="mySubComponent1" threadId="123"/>
    <msgDataElement msgLocale="en-US">
      <msgCatalogTokens value="2"/>
      <msgId>myMsgId2</msgId>
      <msgIdType>myMsgIdType</msgIdType>
      <msgCatalogId>myMsgCatalogId</msgCatalogId>
      <msgCatalogType>myMsgCatalogType</msgCatalogType>
      <msgCatalog>myMsgCatalog</msgCatalog>
    </msgDataElement>
  </CommonBaseEvent>
</CommonBaseEvents>

From the tags you can see that the component reporting the event is myApplication. The component experiencing the problem is myComponent1 within myApplication1. The reporting component does not necessarily have to be different from the component experiencing difficulty, although it is in the example shown here. Now let's look at how you go from a raw log file to a file of Common Base Events.
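As an aside, a new application can emit events in this format directly. The sketch below is illustrative only -- it is not a toolkit API, it hand-assembles a small subset of the attributes shown above, and in real code you would use a Common Base Events library or a proper XML serializer that escapes values:

```java
// Minimal sketch of producing a (partial) Common Base Event as XML.
// Assumption: the caller supplies already-escaped attribute values.
public class CbeSketch {
    static String toCbeXml(String situationType, String severity,
                           String sourceComponent, String msgId) {
        return "<CommonBaseEvent creationTime=\"" + java.time.Instant.now()
             + "\" severity=\"" + severity
             + "\" situationType=\"" + situationType + "\">"
             + "<sourceComponentId component=\"" + sourceComponent + "\"/>"
             + "<msgDataElement msgLocale=\"en-US\"><msgId>" + msgId
             + "</msgId></msgDataElement>"
             + "</CommonBaseEvent>";
    }

    public static void main(String[] args) {
        System.out.println(toCbeXml("STOP", "50", "myComponent1", "SRV0101E"));
    }
}
```

A real event carries many more required attributes (globalInstanceId, reporterComponentId, and so on), as the listing above shows.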
Using the Generic Log Adapter

A Generic Log Adapter is essentially a script that runs against a raw log to create a Common Base Events log. To create that adapter, you use an editor that runs as an Eclipse plug-in. A tutorial on using the GLA to create adapters is available on developerWorks; a link is provided in Resources. A short view of the process looks like this:

1. You will need an Eclipse 3.0 or higher platform environment. If you do not have one running on your machine already, download the Eclipse Tooling package from the Autonomic Computing Toolkit.
2. Install the Generic Log Adapter and Log and Trace Analyzer Tooling plug-in bundle.
3. Create a Simple project.
4. Create a new Generic Log Adapter file.
5. Select a template log to use in the creation of the adapter. This is a real log of the type you will be transforming, so you can test the transformation with real data as you define your adapter. If your goal were to create an adapter for WebSphere Application Server log files, you would use a sample WAS log file to create the adapter itself. Since the format stays the same for all WAS log files, the adapter would work for subsequently generated files. Each template must have its own context, and a sensor is defined for that context.
6. Define the extractor. The extractor contains the information necessary for interpreting a specific log file, such as the convention that an error entry starts with the text [error], including the characters used as separators, and so on. You can also use regular expressions for complex string handling to get the extractor to interpret the raw data.
7. Test the adapter on your template file, viewing the transformation of each entry in the original log file to make sure the output is what you expect. You should also look at the actual Common Base Events XML file to make sure you don't need to tweak the transformation.
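To make step 6 concrete, here is a sketch of the kind of regular-expression rule an extractor encodes. The log format and field names here are invented for illustration; they do not reflect the GLA's actual configuration syntax, which you define in the Eclipse editor rather than in Java:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch of an extractor rule: pull severity, component, and message
 *  out of a raw log line of the (invented) form
 *  "[error] server1: connection refused". */
public class ExtractorSketch {
    static final Pattern RULE = Pattern.compile("\\[(\\w+)\\]\\s+(\\S+):\\s+(.*)");

    /** Returns {severity, component, message}, or null if the line
     *  doesn't match this rule's context. */
    static String[] extract(String rawLine) {
        Matcher m = RULE.matcher(rawLine);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2), m.group(3) };
    }

    public static void main(String[] args) {
        String[] fields = extract("[error] server1: connection refused");
        // prints "error / server1 / connection refused"
        System.out.println(fields[0] + " / " + fields[1] + " / " + fields[2]);
    }
}
```

Each extracted field would then be mapped onto a Common Base Event attribute (severity, sourceComponentId, msgDataElement, and so on).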
Once you've created a Generic Log Adapter, you can use it from the command line to transform any file of the appropriate type, which allows you to embed this process into another application.

The Log and Trace Analyzer

Once you have the logs in the appropriate format, the next step is to analyze them. You can make this process easier using the Log and Trace Analyzer, which is set up as
part of the Generic Log Adapter installation. The LTA actually serves several purposes. In addition to analyzing individual log files, it enables you to correlate events (bring two events together to create a situation) both within and between logs, and to analyze logs on an ongoing basis by attaching to a Java Virtual Machine. It also enables you to plug in a custom "correlation engine" to determine the relationships between events. This can let you see how failures in one subsystem or service cascade to create other situations, allowing you to identify the "root cause" of an event. The LTA analyzes events by comparing them to a symptom database. A symptom database is a collection of potential problems, signs of those problems, and potential solutions. By comparing the events to the symptom database, the LTA can take a seemingly obscure event and tell you not only what it means but, in the case of an error, what to do about it.

Using the Log and Trace Analyzer

In order to analyze a log (or logs) using the Log and Trace Analyzer, you would take the following general steps:

1. Prepare a symptom database (or databases) by importing them and specifying them as "in use." A symptom database can be local, or it can be on a remote server accessible by URL.
2. Import the log file itself.
3. If necessary, filter and/or sort the records to be analyzed. You can select or exclude specific fields of information, such as the severity, or you can choose records based on specific criteria. For example, you might want to include only events that were part of a specific thread or date.
4. Analyze the log. This action compares each record to the symptom database, and if it finds a match, it highlights the record in blue. Clicking the record shows the information from the symptom database.
5. If necessary, correlate the log. You can correlate events within a single log, or you can correlate the events in a pair of logs.
You can also combine multiple logs into a single log in order to correlate more than two logs.
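Conceptually, the matching in step 4 looks something like the sketch below. The symptom entries and the record type are invented for illustration; a real symptom database is far richer and is consulted through the LTA rather than hand-rolled code:

```java
import java.util.List;
import java.util.Optional;

/** Sketch of symptom-database matching: each symptom pairs a sign
 *  (something to look for in an event) with a problem description
 *  and a suggested action. */
public class SymptomSketch {
    record Symptom(String sign, String problem, String action) {}

    /** Return the first symptom whose sign appears in the event text. */
    static Optional<Symptom> analyze(String event, List<Symptom> db) {
        return db.stream().filter(s -> event.contains(s.sign())).findFirst();
    }

    public static void main(String[] args) {
        List<Symptom> db = List.of(
            new Symptom("SRVE0026E", "Servlet error", "Check the servlet's init parameters"),
            new Symptom("refused",   "Port closed",   "Verify the listener is started"));
        analyze("TCP connection refused by host", db)
            .ifPresent(s -> System.out.println(s.problem() + ": " + s.action()));
    }
}
```

A matched event is the "highlighted in blue" case; an empty result is an event the database has nothing to say about.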
Correlating events

Correlation of events can take place within logs or between logs. Using the LTA, you can view both Log interaction diagrams (within a single log) and Log thread interaction diagrams (between two logs). The LTA supports the following types of correlation "out of the box":

• Correlation by Time
• Correlation by URLs and Time
• Correlation by Application IDs and Time
• Correlation by URLs, Application IDs and Time
• Correlation by PMI Request Metrics

You can also create a custom correlation engine with your own specific requirements.
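As an illustration of the simplest of these, correlation by time, the sketch below pairs events from two logs whose timestamps fall within a chosen window. The event representation (epoch milliseconds) and window size are assumptions for the example, not the LTA's internals:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of correlation by time: two events from different logs are
 *  treated as related when their timestamps are close enough. */
public class CorrelateSketch {
    /** Pair up events (epoch-millisecond timestamps) from two logs
     *  whose times differ by no more than windowMs. */
    static List<long[]> correlateByTime(List<Long> logA, List<Long> logB, long windowMs) {
        List<long[]> pairs = new ArrayList<>();
        for (long a : logA)
            for (long b : logB)
                if (Math.abs(a - b) <= windowMs) pairs.add(new long[] { a, b });
        return pairs;
    }

    public static void main(String[] args) {
        List<long[]> pairs =
            correlateByTime(List.of(1000L, 5000L), List.of(1200L, 9000L), 500);
        System.out.println(pairs.size()); // only the 1000/1200 pair is within 500 ms
    }
}
```

The richer correlation types in the list above simply add more matching criteria (same URL, same application ID, and so on) to this kind of pairing.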
Section 6. The autonomic manager

The Autonomic Management Engine (AME)

Now that we've looked at many of the pieces that surround an autonomic solution, let's get to the heart of the matter. To make any autonomic solution work, you need managed resources, and to have managed resources, you need an autonomic manager. At the center of IBM's Autonomic Computing Toolkit is the Autonomic Management Engine (AME). AME provides you with the capability of handling all four aspects of the autonomic control loop. AME communicates with your application via Resource Models, as discussed in the next panel.

Resource Models

A Resource Model is, in many ways, like the universal adapter you can buy at the store when you lose the power supply for a piece of electronics. It has plugs of multiple sizes, and you can set the voltage and polarity to match the piece of equipment to which you want to provide power. A Resource Model serves the same purpose for "plugging in" an application and the Autonomic Management Engine.

It comes down to this: AME needs to know how to access information from the application and how to provide information to the application. Is it described as Common Base Events? Is it a proprietary log format? Should a particular process be monitored? A Resource Model defines all of that.

The IBM Autonomic Computing Toolkit comes with a Resource Model that has been customized for use with the Problem Determination Scenario. You can implement your own Resource Model by looking at the source code for the one that comes with the scenario.

The Resource Model Builder

The Resource Model Builder bundle, part of the IBM Autonomic Computing Toolkit, enables you to create a Resource Model for AME.
The Resource Model includes information on the types of events to be monitored, where they come from, and the thresholds at which actions will be triggered. For example, you might want to ignore an event unless it happens more than 5 times within an hour, or for 5 polls in a row, and so on.

Now let's look at some of the techniques used when coding for use by the AME.

Interaction styles

Over the course of its life, any application will need to interact with resources in a variety of ways. In autonomic computing, we can sort these interactions into four classifications:

• Sensor receive-state: The autonomic manager polls the entity from which it wants to obtain information.
• Sensor receive-notification: The autonomic manager receives a message from the entity in question.
• Effector perform-operation: The autonomic manager issues a command to another managed resource to change states or properties.
• Effector call-out-request: The managed resource knows it's supposed to do something, but it doesn't know what, so it contacts the autonomic manager to find out.
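The four styles above can be sketched as plain Java interfaces. These are hypothetical illustrations of the contracts involved, not the toolkit's actual classes (which are discussed in the next panel):

```java
// Illustrative interfaces for the four interaction styles; names are
// assumptions for this sketch, not toolkit APIs.
public class InteractionStylesDemo {

    // Sensor receive-state: the manager polls the resource.
    interface PollableResource { String currentState(); }

    // Sensor receive-notification: the resource pushes events to the manager.
    interface ResourceListener { void onEvent(String event); }

    // Effector perform-operation: the manager commands the resource.
    interface Effector { void perform(String operation); }

    // Effector call-out-request: the resource asks the manager what to do.
    interface DecisionPoint { String decide(String situation); }

    public static void main(String[] args) {
        // A fake resource and effector wired up with lambdas.
        PollableResource resource = () -> "busyThreads=12";
        Effector effector = op -> System.out.println("performing: " + op);

        // Sensor receive-state: the manager polls on its own schedule...
        String state = resource.currentState();

        // ...analyzes and plans, then acts through the effector.
        if (state.startsWith("busyThreads")) {
            effector.perform("increase thread pool");
        }
    }
}
```

The point of the sketch is the direction of each call: in the sensor styles information flows toward the manager, and in the effector styles control flows away from it.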
The first release of the Autonomic Computing Toolkit includes classes illustrating how the Sensor receive-notification and Effector perform-operation interaction styles can be implemented in an application.

Management topic implementation classes

The current version of the IBM Autonomic Computing Toolkit includes the Management Topic Implementation classes as part of the Problem Determination Scenario. They include com.ibm.autonomic.manager.AutonomicManagerTouchPointSupport and com.ibm.autonomic.resource.ManagedResourceTouchPoint, both of which extend Java's UnicastRemoteObject class. The advantage of this is that these objects can be invoked remotely via the RMI registry. (See Resources for more information on Java RMI.)

Actual coding is beyond the scope of this tutorial, but consider this example of binding a touchpoint into the local RMI registry:

    package com.ibm.autonomic.scenario.pd.manager;

    import com.ibm.autonomic.manager.IManagerTouchPoint;
    import com.ibm.autonomic.manager.ManagerProcessControl;

    public class PDManagerProcessControl extends ManagerProcessControl {

        public PDManagerProcessControl() {
            super();
        }

        public void start() {
            boolean done = false;
            int count = 0;
            while (!done) {
                try {
                    // Create an instance of the problem determination
                    // manager touchpoint...this extends the abstract
                    // manager touch point class
                    IManagerTouchPoint mgrTouchPoint = new PDManagerTouchPoint();

                    // Publish the manager touch point and connect to the
                    // resource...the method below is a convenience
                    // method when only one resource exists...there is
                    // also a method for multiple resources
                    connectAndPublish(mgrTouchPoint,
                                      "//localhost/ManagerTouchPoint",
                                      "//localhost/ResourceTouchPoint");
                    done = true;
                    System.out.println("Successfully connected to ResourceTouchPoint");
                } catch (Throwable th) {
                    try {
                        Thread.sleep(5000);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    if (count++ > 20) {
                        System.out.println(
                            "Error connecting to managed resources: tried 20 times");
                        th.printStackTrace();
                        done = true; // give up rather than retry forever
                    } else {
                        System.out.println("Retrying");
                    }
                }
            }
        }

        public static void main(String[] args) throws Exception {
            // Create an instance and start it up
            PDManagerProcessControl mpc = new PDManagerProcessControl();
            mpc.start();
        }
    }

For more examples of how to use these classes, see the com.ibm.autonomic.scenario.pd.manager.PDAutonomicManagerTouchpointSupport and com.ibm.autonomic.scenario.pd.manager.PDManagedResourceTouchpoint classes in the Problem Determination Scenario.

Section 7. Summary

The IBM Autonomic Computing Toolkit provides you with the tools and technologies you need to begin creating a solution that is self-healing, self-configuring, self-optimizing, and self-protecting. This tutorial provides an overview of the concepts integral to understanding how autonomic computing can be implemented, through autonomic control loops and autonomic tools. It also describes the tools provided in the IBM Autonomic Computing Toolkit and how they relate to each other. These tools and technologies include:

• Application installation using solution installation and deployment technologies
• Administering and configuring applications using the Integrated Solutions Console
• Common Base Events and the Generic Log Adapter
• Symptom databases and the role of the Log Trace Analyzer
• Autonomic management using the Autonomic Management Engine
Resources

Learn

• The developerWorks Autonomic Computing zone includes information for those who are new to autonomic computing, those who want to know more about the core technologies, and those who want to know more about the Autonomic Computing Toolkit itself.
• An autonomic computing roadmap gives you a firm grip on the concepts necessary to understand this tutorial.
• Read more about Autonomic Computing Maturity Levels.
• Learn about Common Base Events.
• Learn more about Java Remote Method Invocation (Java RMI).

Get products and technologies

• You can also download the various Autonomic Toolkit Bundles.

About the authors

Daniel Worden

Daniel Worden, a Studio B author, got his first root password in 1984. He has been installing, de-installing, configuring, testing, and breaking applications since that time, as both a sys admin and a manager of systems administration services. In 1996, he led a team that developed an in-house consolidated Web reporting tool for several dozen corporate databases. His Server Troubleshooting, Administration & Remote Support (STARS) team has been working with IBM as a partner since 1998. His next book, Storage Networks From the Ground Up, is available from Apress in April 2004. He can be reached at dworden@worden.net.

Nicholas Chase

Nicholas Chase, a Studio B author, has been involved in Web site development for companies such as Lucent Technologies, Sun Microsystems, Oracle, and the Tampa Bay Buccaneers. Nick has been a high school physics teacher, a low-level radioactive waste facility manager, an online science fiction magazine editor, a multimedia engineer, and an Oracle instructor. More recently, he was the Chief Technology Officer of an interactive communications company in Clearwater, Florida, and is the author of five books, including XML Primer Plus (Sams). He loves to hear from readers and can be reached at nicholas@nicholaschase.com.