WALT vs PELT : Redux
Pavankumar Kondeti
Engineer, Staff
Qualcomm Technologies, Inc
09/27/2017
pkondeti@qti.qualcomm.com
2
Agenda
• Why do we need load tracking?
• A quick recap of PELT and WALT
• Results
3
Why do we need load tracking?
• Load balancer/group scheduling
• Task placement
◦ Task load and CPU utilization tracking is essential for energy aware scheduling.
◦ Heavy tasks can be placed on higher capacity CPUs. Small tasks can be packed on a busy CPU
without waking up an idle CPU.
• Frequency guidance
◦ CPU frequency governors like ondemand and interactive use a timer and compute the CPU busy time by subtracting the CPU idle time from the timer period (see the sketch below).
◦ Task migrations are not accounted for. If a task's execution time is split across two CPUs, the governor fails to ramp up the frequency.
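To make that limitation concrete, here is a minimal sketch (plain C, not the actual ondemand/interactive code) of the timer-based busy-time calculation described above: busy time is simply the sampling period minus the observed idle time, so execution that migrated to another CPU never shows up in the result.

```c
#include <stdint.h>

/*
 * Illustrative only: a legacy governor samples per-CPU idle time on a timer
 * and treats everything else in the period as busy. Work that moved to
 * another CPU is invisible here, so a split task fails to drive the
 * frequency up.
 */
unsigned int busy_percent(uint64_t idle_prev_us, uint64_t idle_now_us,
                          uint64_t period_us)
{
        uint64_t idle_us = idle_now_us - idle_prev_us;

        if (idle_us > period_us)
                idle_us = period_us;

        /* busy% = (sampling period - idle time) / sampling period */
        return (unsigned int)(((period_us - idle_us) * 100) / period_us);
}
```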
4
PELT recap
• PELT (Per-Entity Load Tracking) was introduced in the 3.8 kernel. Load is tracked at the sched entity and cfs_rq/rq level.
• The load is accounted using a decayed geometric series, with the runnable time in each 1 msec period as the coefficients (see the sketch below). The decay constant is chosen such that a contribution from 32 msec in the past is weighted half as strongly as the current contribution.
• Blocked tasks also contribute to the rq load/utilization. The task load is decayed during sleep.
• Currently, load tracking is done only for the CFS class. There are patches available to extend it to the RT/DL classes.
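A minimal sketch of the geometric-series accumulation described above, assuming the idealized model (1 msec periods, up to 1024 units per fully runnable period, decay factor y = 0.5^(1/32)); it is illustrative user-space C, not the kernel implementation.

```c
/*
 * Simplified PELT-style accumulation: each 1 msec period contributes up to
 * 1024 units, and the running sum is decayed by y = 0.5^(1/32) per period,
 * so a contribution 32 periods old weighs half as much as the current one.
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
        const double y = pow(0.5, 1.0 / 32.0);     /* per-period decay factor */
        const double max_sum = 1024.0 / (1.0 - y); /* geometric series limit  */
        double sum = 0.0;

        for (int period = 1; period <= 192; period++) {
                sum = sum * y + 1024.0;        /* 100% runnable this period */
                double util = sum / max_sum;   /* normalized to 0..1        */
                if (period % 32 == 0)
                        printf("after %3d msec: util ~ %.2f\n", period, util);
        }
        return 0;
}
```

Running it shows the normalized utilization reaching ~0.5 after 32 msec of continuous execution but only ~0.94 after 128 msec, in line with the ~138 msec figure for reaching 95% quoted on the PELT vs WALT slide.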
5
PELT signals
• se->avg.load_avg - The load average contribution of an entity. The entity weight and
frequency scaling are factored into the runnable time.
• cfs_rq->runnable_load_avg - Sum of the load average contributions from all runnable tasks. Used in the load balancer.
• cfs_rq->avg.load_avg - Represents the load average contributions from all runnable and blocked entities. Used in CFS group scheduling.
• se->avg.util_avg - The utilization of an entity. The CPU efficiency and frequency scaling are factored into the running time. Used in EAS task placement.
• cfs_rq->avg.util_avg - Represents the utilization from all runnable and blocked entities. Input to the schedutil governor for frequency selection (see the sketch below).
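For context on that last signal: schedutil applies a fixed headroom when mapping utilization to a frequency. The sketch below shows the well-known formula, next_freq ≈ 1.25 * max_freq * util / max_capacity, as a simplified stand-alone C function rather than the kernel's actual get_next_freq().

```c
#include <stdint.h>

/*
 * Simplified sketch of schedutil's frequency selection:
 *   next_freq = 1.25 * max_freq * util / max_capacity
 * The ~25% headroom makes the governor request a frequency somewhat above
 * what the current utilization strictly needs.
 */
uint32_t schedutil_next_freq(uint32_t max_freq_khz, uint32_t util,
                             uint32_t max_cap)
{
        uint64_t freq = (uint64_t)max_freq_khz + (max_freq_khz >> 2); /* +25% */

        return (uint32_t)(freq * util / max_cap);
}
```

When WALT is enabled, the window-based rq->prev_runnable_sum described later (normalized to the window size) is fed in as the utilization input instead.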
6
WALT Introduction
• WALT keeps track of the most recent N windows of execution for every task. Windows where a task had no activity are ignored and not recorded.
• Task demand is derived from these N samples. Different policies like max(), avg(), and max(recent, avg) are available (see the sketch below).
• The wait time of a task on the CPU rq is also accounted towards its demand.
• Task utilization is tracked separately from the demand. The utilization of a task is the execution
time in the recently completed window.
• The CPU rq's utilization is the sum of the utilization of all tasks that ran in the recently completed window.
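A minimal sketch of deriving task demand from the last N window samples under the policies named above (max, average, and max of the most recent sample vs. the average). This is stand-alone C for illustration only; the number of windows and the sample layout are assumptions, not WALT's actual data structures.

```c
#include <stdint.h>

#define NUM_WINDOWS 5  /* N recent windows with activity; platform dependent */

enum demand_policy { POLICY_MAX, POLICY_AVG, POLICY_MAX_RECENT_AVG };

/* samples[0] is the execution time in the most recently completed window */
uint64_t task_demand(const uint64_t samples[NUM_WINDOWS],
                     enum demand_policy policy)
{
        uint64_t max = 0, sum = 0;

        for (int i = 0; i < NUM_WINDOWS; i++) {
                if (samples[i] > max)
                        max = samples[i];
                sum += samples[i];
        }

        switch (policy) {
        case POLICY_MAX:
                return max;
        case POLICY_AVG:
                return sum / NUM_WINDOWS;
        case POLICY_MAX_RECENT_AVG:
        default:
                /* whichever is larger: most recent window or the average */
                return samples[0] > sum / NUM_WINDOWS ?
                        samples[0] : sum / NUM_WINDOWS;
        }
}
```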
7
WALT signals
• p->ravg.demand - Task demand. Used in the EAS task placement.
• rq->cumulative_runnable_avg - Sum of the demand of all runnable tasks on this CPU. Represents the instantaneous load. Used in the EAS task placement.
• rq->prev_runnable_sum - CPU utilization in the recently completed window. Input to schedutil (see the sketch below).
• rq->cum_window_demand - Sum of the demand of the tasks that ran in the current window. This signal is used to estimate the frequency. Used in the EAS task placement for evaluating the energy difference.
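A minimal sketch of how the per-CPU window sums above could roll over at a window boundary. The struct fields, helper name, and window handling here are hypothetical simplifications for illustration, not the actual WALT accounting code.

```c
#include <stdint.h>

/* Hypothetical, simplified per-CPU accounting state for illustration only */
struct walt_cpu {
        uint64_t window_start;       /* start of the current window (ns)      */
        uint64_t curr_runnable_sum;  /* execution accumulated in this window  */
        uint64_t prev_runnable_sum;  /* utilization of the completed window   */
};

#define WALT_WINDOW_NS (10ULL * 1000 * 1000)  /* 10 msec, as on the test setup */

/* Called when accounting notices that 'now' has crossed a window boundary */
void rollover_window(struct walt_cpu *cpu, uint64_t now)
{
        while (now >= cpu->window_start + WALT_WINDOW_NS) {
                /* the just-completed window becomes the schedutil input */
                cpu->prev_runnable_sum = cpu->curr_runnable_sum;
                cpu->curr_runnable_sum = 0;   /* an idle window reports zero */
                cpu->window_start += WALT_WINDOW_NS;
        }
}
```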
8
PELT vs WALT
• WALT provides a more dynamic demand/utilization signal than PELT, which is subject to the geometric series.
• PELT takes more time to detect a heavy task, or to re-classify a light task as heavy, so the task migration to a higher-capacity CPU is delayed. For example, it takes ~138 msec for a task with 0 utilization to become a 95% utilization task.
• PELT decays the utilization of a task when it goes to sleep. For example, a 100% utilization task becomes a 10% utilization task after just 100 msec of sleep (both figures are worked out below).
• Similar to task utilization, PELT takes more time to build up the CPU utilization, thus delaying the frequency ramp up.
• PELT's gradual decay of blocked utilization means the underlying cpufreq governor has to delay dropping the frequency for longer than necessary.
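Both figures follow directly from the PELT decay factor y = 2^(-1/32). A short derivation under the idealized model (ignoring the kernel's fixed-point arithmetic):

```latex
% Ramp-up: utilization after t msec of continuous execution, with y = 2^{-1/32}
u(t) = 1 - y^{t}
% Time to reach 95% utilization starting from zero
1 - y^{t} = 0.95 \;\Rightarrow\; t = \frac{\ln 0.05}{\ln y} = 32\,\log_2 20 \approx 138\ \text{msec}
% Decay: a 100% utilization task after 100 msec of sleep
y^{100} = 2^{-100/32} \approx 0.11 \approx 10\%
```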
9
Frequency ramp up
[Plots: task execution and CPU frequency over time, WALT vs PELT]
• The frequency ramp up is 4 times faster on WALT
• The CPU utilization is limited by the current capacity. If the frequency is not ramped up fast, the utilization accumulation is delayed, which in turn delays ramping up to Fmax.
10
Frequency ramp down
[Plots: task execution and CPU frequency over time, WALT vs PELT]
• The frequency ramp down is 8 times faster on WALT
• Blocked tasks also contribute to the CPU utilization in PELT. This delays the frequency ramp down; it hurts power but may boost performance in some use cases.
11
Results
• Benchmarks
• JankBench
• UX (Twitter, Facebook, YouTube)
• Games
12
Testing platform
• The tests are run on the SDM835 platform
• The EAS + schedutil setup is based on the AOSP common kernel. A few additional patches that are available in the AOSP Gerrit review are included.
• HZ = 300 and the WALT window size is 10 msec.
• The schedtune boost for top-app is 10% by default. It is boosted to 50% for 3 seconds upon a touch event.
13
Benchmark Result
• GeekBench (v4)
  ◦ ST: same
  ◦ MT: same
• PCMark (v1.2)
  ◦ Total: WALT is 23% better
  ◦ Web browsing: WALT is 38% better
  ◦ Video Playback: PELT is 4% better
  ◦ Writing: WALT is 33% better
  ◦ Photo Editing: WALT is 32% better
• Antutu (v6): same
• AndroBench (v5)
  ◦ Sequential Read: same
  ◦ Sequential Write: same
  ◦ Random Read: WALT is 51% better
  ◦ Random Write: WALT is 11% better
Note: WALT accounts task wait time towards demand, which results in task spreading and improved random I/O performance. With schedtune.prefer_idle = 1, both PELT and WALT give the same result.
14
Frequency ramp up in PCMark
[Plots: CPU frequency ramp up during PCMark, PELT vs WALT]
• The faster frequency ramp up is helping WALT in PCMark.
• Better results are observed with Patrick's util_est series for PELT; the gap is reduced from 23% to 10%.
15
List View Fling (frame time percentiles, msec)
Percentile    PELT         WALT
25%           5.541621     5.046939
50%           6.235489     5.364740
75%           8.211237     5.858221
90%           9.225127     6.928594
99%           11.848338    10.306270

Image List View Fling (frame time percentiles, msec)
Percentile    PELT         WALT
25%           5.170975     4.950513
50%           5.715942     5.348545
75%           6.817395     5.850904
90%           8.662801     6.625814
99%           11.443108    9.915054

Power
• List View Fling: PELT is 2% better
• Image List View Fling: PELT is 4% better
16
Shadow Grid Fling (frame time percentiles, msec)
Percentile    PELT         WALT
25%           6.500720     6.219871
50%           7.525394     7.932580
75%           9.514976     9.256707
90%           10.416724    10.443205
99%           12.531525    13.334770

High hitrate text render (frame time percentiles, msec)
Percentile    PELT         WALT
25%           5.663946     5.161756
50%           6.003140     5.444418
75%           6.726914     6.053425
90%           8.065195     7.008964
99%           12.561744    11.277686

Power
• Shadow Grid Fling: PELT is 7% better
• High hitrate text render: PELT is 3% better
17
Low hitrate text render (frame time percentiles, msec)
Percentile    PELT         WALT
25%           5.596932     5.106558
50%           5.945192     5.388022
75%           6.695878     5.898033
90%           8.143912     6.888017
99%           12.639909    10.600410

Edit text input (frame time percentiles, msec)
Percentile    PELT         WALT
25%           5.566935     4.477202
50%           6.977525     5.445546
75%           9.407826     6.955690
90%           14.892524    9.764885
99%           21.150100    16.515769

Power
• Low hitrate text render: PELT is 3% better
• Edit text input: PELT is 6% better
18
BitMap upload (overdraw) (frame time percentiles, msec)
Percentile    PELT         WALT
25%           15.136069    14.489167
50%           15.908737    15.844335
75%           16.836846    18.725650
90%           18.500224    21.474493
99%           22.062164    28.835235

Power
• BitMap upload (overdraw): PELT is 1% better
19
WALT has 5% better power with similar performance (number of janks). Frame rendering times are better with PELT.
20
WALT has 5% better power with similar performance (number of janks). Frame rendering times are better with PELT.
21
Both PELT and WALT played the video at 30 fps. WALT has 4% better power.
22
Games
• Subway Surfers: same. With touchboost disabled, power is 10% better with WALT; PELT has better performance (~2 FPS).
• Thunder Cross: power is 3% better with WALT for the same performance measured in FPS.
• Candy Crush Soda Saga: same.
23
Conclusion
• There is no clear winner in UX use cases like JankBench. The performance (especially at the 90th and 99th percentiles) is better with WALT. The touchboost settings can be tuned down to save power. Note that for UX use cases, rendering a frame within 16 msec is critical; the power optimizations are not possible if we can't meet this deadline.
• Clear power savings with WALT for YouTube streaming.
• WALT performance is better in benchmarks like PCMark.
• We will do this comparison again with Patrick's util_est patches, which are meant to address the load decay and slower ramp-up problems with PELT.
Follow us on:
For more information, visit us at:
www.qualcomm.com & www.qualcomm.com/blog
Thank you
Nothing in these materials is an offer to sell any of the components or devices referenced herein.
©2017 Qualcomm Technologies, Inc. and/or its affiliated companies. All Rights Reserved.
Qualcomm is a trademark of Qualcomm Incorporated, registered in the United States and other countries. Other products and brand names
may be trademarks or registered trademarks of their respective owners.
References in this presentation to “Qualcomm” may mean Qualcomm Incorporated, Qualcomm Technologies, Inc., and/or other subsidiaries
or business units within the Qualcomm corporate structure, as applicable. Qualcomm Incorporated includes Qualcomm’s licensing business,
QTL, and the vast majority of its patent portfolio. Qualcomm Technologies, Inc., a wholly-owned subsidiary of Qualcomm Incorporated,
operates, along with its subsidiaries, substantially all of Qualcomm’s engineering, research and development functions, and substantially all
of its product and services businesses, including its semiconductor business, QCT.
