HKG18-TR12 - LAVA for LITE Platforms and Tests

Linaro
HKG18 TR12 - LAVA for LITE Platforms and Tests
Bill Fletcher
Training Contents Summary
Part 1
● LAVA overview and test job basics
● Getting started with the Lab Instance
● Anatomy of a test job
● Looking at LAVA results
● Writing tests
Part 2
● LAVA in ci.linaro.org overview
● Invoking LAVA via xmlrpc
● Metadata
● Job templates
● Specifics
○ Material is specific to LITE
○ Emphasis is Zephyr targets rather than
Linux (i.e. monolithic images and no
shells)
● Out of scope
○ mcuboot*
○ Installing a local LAVA instance
○ Adding new devices
○ Adding new features
○ LAVA/Lab planning
*as far as I can tell mcuboot isn’t supported anywhere yet
LAVA Overview
● The Linaro Automated Validation Architecture
● An automation system for deploying executable images
onto physical and virtual hardware for running tests
● Very scalable
● More details at https://validation.linaro.org/static/docs/v2/
● LAVA Lab went live in July 2011 with 2(!) device types
● Features in the latest version:
○ YAML format job submissions
○ Live result reporting
○ A lot of support for scaled and/or distributed instances
Basic Elements of LAVA
● Web interface - UI based on the uWSGI application
server and the Django web framework. It also
provides XML-RPC access and the REST API.
● Database - PostgreSQL locally on the master
storing jobs and device details
● Scheduler - periodically this will scan the database
to check for queued test jobs and available test
devices, starting jobs on a Worker when the needed
resources become available.
● Lava-master daemon - This communicates with the
worker(s) using ZMQ.
● Lava-slave daemon - This receives control
messages from the master and sends logs and
results back to the master using ZMQ.
● Dispatcher - This manages all the operations on the
device under test, according to the job submission
and device parameters sent by the master.
● Device Under Test (DUT)
Dispatchers and Devices
● The picture on the left shows Hikey boards
in the Lab connected to one of the
Dispatchers
● The Dispatcher in this case provides:
○ USB ethernet - Networking
○ FTDI serial - console
○ USB OTG - interface for fastboot/flashing
○ Mode control (via OTG power or not)
○ Power control
● The Dispatcher needs to be able to:
○ Put the device in a known state
○ Deploy the test image to the device
○ Boot the device
○ Monitor the execution of the test phase precisely
○ Put the device back into a known state
LAVA Test Job Basics - a pipeline of Dispatcher actions
deploy
● Downloads files required by the job to the dispatcher
● to: parameter selects the deployment strategy class
boot
● Boot the device
● The device may be powered up or reset to provoke the boot.
● Every boot action must specify a method: which is
used to determine how to boot the deployed files on
the device.
test
● Execute the required tests
● Monitor the test execution
● Use naming and pattern matching elements to parse
the specific test output
Any action block
● Individual action blocks can be repeated conditionally
or unconditionally
● Groups of blocks (e.g. boot, test) can also be repeated
● Other elements/modifiers are: timeouts, protocols,
user notifications
A Simplified Example Pipeline of Test Actions
1. deploy
1.1. strategy class
1.2. zephyr image url
2. boot
2.1. specify boot method (e.g. cmsis/pyocd)
3. test
3.1. monitor patterns
A more complex job pipeline (diagram): deploy → boot → test → test,
with action blocks repeated (e.g. 3x) or retried on failure.
Test Job Actions
● A test job reaches the LAVA Dispatcher as a pipeline of actions
● The action concept within a test job definition is tightly defined
○ there are 3 types of actions (deploy, boot, test)
○ actions don’t overlap (e.g. a test action doesn’t do any booting)
○ Repeating an action gives the same result (idempotency)
● The pipeline structure of each job is explicit - no implied actions or behaviour
● All pipeline steps are validated at submission; this includes checks on all URLs
● Actions, and the elements that make them up, are documented here:
https://validation.linaro.org/static/docs/v2/dispatcher-actions.html#dispatcher-actions
● Link to Sample Job Files
job definition actions - k64f
# Zephyr JOB definition for NXP K64F
device_type: 'frdm-k64f'
job_name: 'zephyr tutorial 001 - from ppl.l.o'
[global timeouts, priority, context blocks omitted]
actions:
- deploy:
    timeout:
      minutes: 3
    to: tmpfs
    images:
      zephyr:
        url: 'https://people.linaro.org/~bill.fletcher/zephyr_frdm_k64f.bin'
- boot:
    method: cmsis-dap
    timeout:
      minutes: 3
- test:
    monitors:
    - name: 'kernel_common'
      start: (tc_start()|starting test)
      end: PROJECT EXECUTION
      pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+).
      fixupdict:
        PASS: pass
        FAIL: fail
Getting Started with the Lab Instance
A quick tour of the web UI
https://validation.linaro.org/
“The LAVA Lab”
(Cambridge Lab - Production Instance)
Large installation with 7 workers
1.6M jobs
Reports 118 public devices
validation.linaro.org (LAVA Lab) - logging in
For access to the lab, mail a request to lava-lab-team@linaro.org
WebUI - Scheduler drop-down for general status
● Submit Job - can directly paste a
yaml job file here
● View All Jobs, or jobs in various
states
● All (Active) Devices
● Reports - Overall health check
statistics
● Workers - details of Dispatcher
instances
WebUI drop-down for job authentication tokens
All job submission requires authentication
● Create an authentication token:
https://validation.linaro.org/api/tokens/
● Display the token hash
Getting Support/Reporting Issues
LAVA Lab
Tech Lead: Dave Pigott
Support Portal
● "Problems" -> "Report a Problem".
Mention “LAVA Lab:" for correct
assignment
● Tickets should prominently feature
‘LITE’ in the subject and summary
● Generally please put as much info as
possible in the summary
● For e.g. VPN please include public keys
with the request
LAVA Project
Tech Lead: Neil Williams
Support Info
Mailing list: lava-users@lists.linaro.org
Bugs.linaro.org (->LAVA Framework)
lava-tool
● the command-line tool for interacting with the various services that LAVA
offers using the underlying XML-RPC mechanism
● can also be installed on any machine running a Debian-based distribution,
without needing the rest of LAVA ( $ apt-get install lava-tool )
● allows a user to interact with any LAVA instance on which they have an
account
● primarily designed to assist users in manual tasks and uses keyring integration
● Basic useful lava-tool features:
$ lava-tool auth-add <user@lava-server>
$ lava-tool submit-job <user@lava-server> <job definition file>
Using lava-tool to submit a Lava Job
● This example uses a prebuilt image and job definition file
● Use a test image built for a lab-supported platform - in this case frdm-k64f - at
https://people.linaro.org/~bill.fletcher/zephyr_frdm_k64f.bin
● Use the yaml job definition file here (complete version … and on next slide)
● Get an authentication token from the web UI and paste into lava-tool as prompted
$ lava-tool auth-add https://first.last@validation.linaro.org
● Submit the job
$ lava-tool submit-job https://first.last@validation.linaro.org zephyr_k64_job001.yaml
● lava-tool returns the job number if the submission is successful. You can follow the
results at https://validation.linaro.org/scheduler/myjobs, finding the job number.
# Zephyr JOB definition for NXP K64F
device_type: 'frdm-k64f'
job_name: 'zephyr tutorial hw test job submission 001 - from ppl.l.o'
timeouts:
  job:
    minutes: 6
  action:
    minutes: 2
priority: medium
visibility: public
actions:
- deploy:
    timeout:
      minutes: 3
    to: tmpfs
    images:
      zephyr:
        url: 'https://people.linaro.org/~bill.fletcher/zephyr_frdm_k64f.bin'
- boot:
    method: cmsis-dap
    timeout:
      minutes: 3
- test:
    monitors:
    - name: 'kernel_common'
      start: (tc_start()|starting test)
      end: PROJECT EXECUTION
      pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+).
      fixupdict:
        PASS: pass
        FAIL: fail
Anatomy of a test job definition
● Documentation
https://validation.linaro.org/static/docs/v2/explain_first_job.html
● Example job file k64f-kernel-common (previous slide)
● General details -
○ device_type - used by the Scheduler to match your job to a device
○ job_name - free text appearing in the list of jobs
○ Global timeouts - to detect and fail a hung job
● Context:
○ Used to set values for selected variables in the device configuration.
○ Most commonly, to tell the qemu template e.g. which architecture is
being tested
● Test Job actions: Deploy, Boot, Test
Deploy action
- deploy:
    timeout:
      minutes: 3
    to: tmpfs
    images:
      zephyr:
        image_arg: '-kernel {zephyr}'
        url: 'https://...
Downloads files required by the job
to the dispatcher
detailed docs
timeout: self-explanatory - can use
seconds … to hours ...
to: specifies the deploy method
image_arg: only needed for jobs that
run on qemu Cortex M3
url: the location of the image
Many other deploy features not used here: OS
awareness, loading test overlays onto rootfs images
Boot action
- boot:
    method: cmsis-dap
    timeout:
      minutes: 3
Boot the device
Detailed docs
timeout: self-explanatory
method: specifies either the command to run on
the dispatcher or the interaction with the bootloader on
the target
Zephyr specific boot methods:
● cmsis_dap.py
● pyocd.py
● qemu
No Parameters?
● The individual board is not known at job
submission time, so the Scheduler has to
populate the relevant ports, power-reset
control I/O etc
● Command line parameters for e.g. pyocd
are populated from the device_type
template in the Scheduler
Test action
- test:
    monitors:
    - name: 'kernel_common'
      start: (tc_start()|starting test)
      end: PROJECT EXECUTION
      pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+).
      fixupdict:
        PASS: pass
        FAIL: fail
Execute the required tests
monitors: one-way DUT connection -
https://git.linaro.org/lava/lava-dispatcher.git/tree/lava_dispatcher/actions/test/monitor.py
name: appears in the results output
start: string used to detect when the
test action starts
end: string used to detect when the
test action is finished
pattern: supplies a parser that
converts each test output into results
fixupdict: as a default, LAVA
understands only
“pass”|”fail”|”skip”|”unknown”
Sample output to parse:
PASS - byteorder_test_memcpy_swap.
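Since pattern: is an ordinary Python/pexpect regular expression, it can be prototyped offline before a job is submitted. A minimal sketch (not LAVA code) applying the pattern and fixupdict above to the sample line:

# Offline check of a monitor pattern against one captured console line
import re
pattern = r'(?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+).'
fixupdict = {'PASS': 'pass', 'FAIL': 'fail'}
line = 'PASS - byteorder_test_memcpy_swap.'
m = re.search(pattern, line)
if m:
    result = fixupdict.get(m.group('result'), m.group('result'))
    print('%s %s' % (m.group('test_case_id'), result))   # byteorder_test_memcpy_swap pass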
Looking at LAVA results
See what happens when we run the job …
● In the following slides:
● Results
● Job Details
● Timing
$ lava-tool submit-job returns the job number...
A link to the full trace is here:
https://validation.linaro.org/scheduler/job/1656241
Job Summary List (screenshot) - Results, Details
Results (screenshot) - Your Tests, LAVA's checks
Job Details - start of Deploy action
Job Details - start of Boot action
Job Details - Test action parsing
- test:
    monitors:
    - name: 'kernel_common'
      start: (tc_start()|starting test)
      end: PROJECT EXECUTION
      pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+).
      fixupdict:
        PASS: pass
        FAIL: fail
(not matched)
Job Timing - for timeout tuning, not
benchmarking
A More Complex Zephyr Test Example
Output the Zephyr boot time values as the result of a test, and also that the boot
test succeeded (tests/benchmarks/boot_time)
tc_start() - Boot Time Measurement
Boot Result: Clock Frequency: 12 MHz
__start : 0 cycles, 0 us
_start->main(): 5030 cycles, 419 us
_start->task : 5461 cycles, 455 us
_start->idle : 8934 cycles, 744 us
Boot Time Measurement finished
===================================================================
PASS - main.
===================================================================
PROJECT EXECUTION SUCCESSFUL
Pipeline: cascade 2 test actions
The first test action matches _start->... and picks out the microsecond values
The second test action matches PASS and picks out the test case which is main
Example Solution:
The Job Definition
The Results
The Measurements
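The named regex groups described on the next slide (test_case_id, result, measurement, units) are what let the first test action record the microsecond values as measurements. A hypothetical pattern for the _start-> lines above, prototyped offline; the exact expression in the linked job definition may differ:

# Hypothetical measurement pattern for the boot_time output - prototype offline
import re
pattern = r'(?P<test_case_id>_start->\w+)\(?\)?\s*:\s*\d+ cycles,\s*(?P<measurement>\d+)\s*(?P<units>us)'
for line in ['_start->main(): 5030 cycles, 419 us',
             '_start->task : 5461 cycles, 455 us',
             '_start->idle : 8934 cycles, 744 us']:
    m = re.search(pattern, line)
    print('%s %s %s' % (m.group('test_case_id'), m.group('measurement'), m.group('units')))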
Writing Tests
● pattern: expressions need to be compatible with pexpect/re (used by the
Dispatcher)
● monitor: is for devices without a unix-style* shell. It handles output only
● monitor: pattern matches can populate named Python regex groups for
test_case_id, result, measurement, units
● Obviously tests that need some interaction to boot and/or run can’t be
automated with LAVA
● The pattern: syntax has not been designed for complex detailed parsing of
test output logs. The expectation was that it would invoke (via a shell) and
parse the results of scripts/commands that would do most of the heavy lifting
in dealing with test suite output
*The Lava Test Shell is used for testing devices that have a unix style shell and a writeable FS.
Writing tests - coping strategies
● Most (non-Zephyr) LAVA users craft their test invocation scripts to fit existing
pattern: boilerplate
● Prototype pattern: re expressions in an offline python script before trying them
in LAVA
● Debug them further in LAVA test actions on an M3 qemu instance first (fast,
doesn’t tie up resources, unbreakable)
● The more carefully crafted a pattern: is, the more brittle it will likely be when
the Zephyr-side code changes
● Cascading multiple test action blocks can solve more complex parsing
problems
LAVA and CI
Overview
LAVA in ci.linaro.org
XMLRPC
Metadata
Job templates
Overview - industrializing LAVA
Health checks
Target requirements
Metadata
Health Checks & Gold Standard Images
● Health check
○ special type of test job
○ designed to validate a test device and the infrastructure around it
○ run periodically to check for equipment and/or infrastructure failures
○ needs to at least check that the device will boot and deploy a test image.
● Writing Health Checks
○ It has a job name describing the test as a health check
○ It has a minimal set of test definitions
○ It uses gold standard files
● Gold Standard
○ Gold standard has been defined in association with the QA team.
○ Provide a known baseline for test definition writers
○ (open point: are there gold standard images and jobs for LITE target boards?)
Sources of Target Board Success ...
● See https://validation.linaro.org/static/docs/v2/device-integration.html section
on Device Integration
A few LITE-relevant points:
● Serial
○ Persistent, stable
○ if over a shared OTG cable, other traffic does not disrupt trace
● Reset
○ Image data not retained
○ ‘old’ serial data not buffered/retained
● Predictable & repeatable
● No manual intervention
Metadata
● Linking a LAVA job and its result artifacts back to the code - not important for
ad hoc submission, but vital for CI
● Specific metadata: section within the job file
● Can be queried for a job via xmlrpc (see the sketch after the example below)
● Example API call: get_testjob_metadata(job_id)
● Call returns entries created by LAVA as well as submitted in the test job
definition
● Example
metadata:
  build-url: $build_url
  build-log: $build_url/consoleText
  zephyr-gcc-variant: $gcc_variant
  platform: $board_name
  git-url: https://git.linaro.org/zephyrproject-org/zephyr.git
  git-commit: $git_commit
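As an illustration of querying it, a sketch using the same xmlrpclib setup as the "Invoking a LAVA job via xmlrpc" slide later on; the method namespace here is an assumption, so check https://validation.linaro.org/api/help/ for the authoritative call:

# Fetch the metadata recorded for a job (namespace assumed to be scheduler)
import xmlrpclib
username = "first.last"
token = "<token string>"
server = xmlrpclib.ServerProxy("https://%s:%s@validation.linaro.org/RPC2" % (username, token))
print server.scheduler.get_testjob_metadata(1656241)   # job number from the earlier example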
LAVA in ci.linaro.org
Idealised flow (diagram): Jenkins (ci.linaro.org) sends a job file to the LAVA instance
(validation.linaro.org), which deploys, boots and tests on the Test Farm; output and
results (?) flow back.
● In practice, LAVA jobs are submitted by the QA server, which acts as a proxy, not by
ci.linaro.org directly (diagram labels: linaro-cp, submit-for-qa, submit-to-lava)
● In either case LAVA is invoked via xmlrpc APIs (metadata?)
Invoking a LAVA job via xmlrpc
#!/usr/bin/python
import xmlrpclib
username = "bill.fletcher"
token = "<token string>"
hostname = "validation.linaro.org"
server = xmlrpclib.ServerProxy("https://%s:%s@%s/RPC2" % (username, token, hostname))
jobfile = open("zephyr_k64_job001.yaml")
jobtext = jobfile.read()
id = server.scheduler.submit_job(jobtext)
print server.scheduler.job_status(id)
The above is approximately equivalent to $ lava-tool submit-job ...
The API is documented here https://validation.linaro.org/api/help/
Creating the jobfile on the fly - templates
Uses class string.Template(template)
# Excerpt from a job submission script - args is assumed to come from argparse
import os
import sys
from string import Template

template_file_name = "lava-job-definitions/%s/template.yaml" % (args.device_type, )
test_template = None
if os.path.exists(template_file_name):
    test_template_file = open(template_file_name, "r")
    test_template = test_template_file.read()
    test_template_file.close()
else:
    sys.exit(1)
replace_dict = dict(
    build_url=args.build_url,
    test_url=args.test_url,
    device_type=args.device_type,
    board_name=args.board_name,
    test_name=args.test_name,
    git_commit=args.git_commit,
    gcc_variant=args.gcc_variant
)
template = Template(test_template)
lava_job = template.substitute(replace_dict)
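To close the loop, a short sketch (assuming the xmlrpclib ServerProxy from the previous slide is available as server) submitting the rendered template, exactly as would be done with a hand-written job file:

# Submit the generated job text just like a hand-written job definition
job_id = server.scheduler.submit_job(lava_job)
print server.scheduler.job_status(job_id)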
Job Templates - actions
actions:
- deploy:
    timeout:
      minutes: 3
    to: tmpfs
    images:
      zephyr:
        url: '$test_url'
- boot:
    method: pyocd
    timeout:
      minutes: 10
- test:
    timeout:
      minutes: 10
    monitors:
    - name: '$test_name'
      start: (tc_start()|starting .*test|BOOTING ZEPHYR OS)
      end: PROJECT EXECUTION
      pattern: (?P<result>(PASS|FAIL))\s-\s(?P<test_case_id>\w+).
      fixupdict:
        PASS: pass
        FAIL: fail
Consider also including pattern: in the template, so that it tracks any changes in the test
Job Templates - general, timeouts & metadata
# Zephyr JOB definition for frdm-kw41z
device_type: '$device_type'
job_name: 'zephyr-upstream $test_name'
timeouts:
  job:
    minutes: 30
  action:
    minutes: 3
  actions:
    wait-usb-device:
      seconds: 40
priority: medium
visibility: public
<actions>
metadata:
  build-url: $build_url
  build-log: $build_url/consoleText
  zephyr-gcc-variant: $gcc_variant
  platform: $board_name
  git-url: https://git.linaro.org/zephyrproject-org/zephyr.git
  git-commit: $git_commit
Thank You
#HKG18
HKG18 keynotes and videos on: connect.linaro.org
For further information: www.linaro.org