My slides from the session at sitHH on 12 May 2012 about static ABAP code analysis tools and my experience with them. Apart from the tools, I share my personal lessons learned for establishing a code profiling process.
1. Static ABAP Code
Analysis
A Comparison of Tools, with a Field Report
Sonntag, 13. Mai 12 1
2. Disclaimer
• First talk at a SIT
• Not a native English speaker
This presentation represents my personal
opinion and is not related to any company,
nor is it the single godly truth
4. Who am I?
• Markus Theilen
• Enterprise Architect at EWE ENERGIE AG
• before that Software Architect at BTC AG
• responsible for Customer Care and Billing
for Utilities
6. Background
• EWE does not use SAP IS-U for Customer
Care and Billing, but develops its own
solution: easy+
• Since 1995, this solution has been built
and maintained by BTC AG on behalf of
EWE ENERGIE AG
7. easy+
• Productive use since 1997
• Pure ABAP coding
• Today, about 100 people in development
and maintenance + 20 people in support
• used by EWE and about 10 public utility
companies
8. easy+: A few facts
• over 25 million invoices billed so far
• 8.8 TB data volume
• 8.2 million lines of code
• 700 packages, 8,000 reports, 6,000 classes
• 8,000 tables
9. Problems
• Team size and fluctuation lead to a very
heterogeneous knowledge and skill set
• Maintenance is getting
harder with each iteration
• Too much code for
manual review of coding
guidelines
10. Problems
• With code size,
complexity can grow
exponentially
• Code that looks fine locally can lead to
problems when seen in its context and
call hierarchy
• Complexity is too high
for manual checks
11. Problems
• No factual statements about code quality
possible
• No direct indicators for
architects/management to
decide about where to
spend time and money to
correct the most urgent
problems first
12. But #1 Problem is:
You do not know what
your problems are, until
you measure your code!
13. Use of static analysis
• Gives insight and leads you to your
problems
• Makes it possible to concentrate on hot
spots
• Base decisions on facts, not myths and
rumours!
14. Use of static analysis
• tool-based analysis is cheaper than manual
reviews, but it comes with a price and is far
from being perfect!
• false positives, missed violations
• expect no solution for your problems,
tools just help to find and pinpoint them!
• only static information is examined,
mostly no dynamic aspects are covered!
15. What a tool should offer
• reliable rule engine
• definition of exceptions / false positives
• explanations of rules
• reasoning, good/bad examples
• seamless integration into development
cycle
17. CAST
Application Intelligence
Platform
(in production)
18. What is it?
• developed by CAST, headquartered in
France
• „world-wide leader in automated
application intelligence“
• not just a simple scanner-and-rules-engine,
but an application metadata knowledge
base
19. How it works
• external scanning and analysis engine,
written in C++
• transfer of source information via extraction
report and files
• analysis of source code, mapping to common meta
model, creation of relations between objects
(calls, uses, etc.)
• results are shown in dashboard web application
and in fat client for architecture analysis
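The mapping-and-relations step can be pictured with a tiny sketch (purely illustrative, not CAST's actual implementation; all object names are made up): analyzed objects become generic nodes in a common meta model, typed relations such as "calls" are recorded between them, and walking those relations backwards is what later enables impact analysis along the call hierarchy.

```python
# Illustrative sketch of a common meta model with typed relations.
from collections import defaultdict

class MetaModel:
    def __init__(self):
        self.objects = {}                  # name -> kind ("class", "report", ...)
        self.relations = defaultdict(set)  # (source, relation type) -> targets

    def add_object(self, name, kind):
        self.objects[name] = kind

    def add_relation(self, source, rel_type, target):
        self.relations[(source, rel_type)].add(target)

    def callers_of(self, target):
        # Walk all "calls" relations backwards -- the basis for
        # change-impact analysis along the call stack.
        return {src for (src, rel), tgts in self.relations.items()
                if rel == "calls" and target in tgts}

# Hypothetical development objects:
model = MetaModel()
model.add_object("ZCL_BILLING", "class")
model.add_object("ZR_INVOICE_RUN", "report")
model.add_relation("ZR_INVOICE_RUN", "calls", "ZCL_BILLING")
```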
20. How it works
• CAST uses a customisable hierarchy of
result aggregation
• health factors like robustness,
performance
• quality indicators like complexity,
programming practices, documentation
• quality metrics (basic rules)
28. Example: the CAST Monthly Flyer (results of 27 April 2012, translated from German)

Violations per kLOC by topic area (critical and total, with diff to the previous snapshot):

Topic area         Viol. crit   Diff.   Viol. total   Diff.
Abrechnen          2.97         -0.01   30.61          0.00
Accounting         2.34         -0.01   47.34         -0.08
Architektur        2.91         -0.01   42.44          0.03
BusinessWarehouse  5.20          0.01   33.28          0.03
CustomerCare       2.62         -0.04   45.68          0.28
Messen             2.93          0.14   40.13          0.29
MPK                1.88          0.01   50.43          0.18
OutputManagement   3.29          0.00   31.18         -0.01
Statistik          3.44         -0.01   34.10         -0.22
easy+ (overall)    2.69          0.00   40.85          0.08

This table shows the number of critical violations and the total number per thousand lines of code within each topic area. Example: a Viol. crit value of 3 means that a thousand lines of code contain, on average, 3 critical rule violations.

Top 10 violations (count, weighted value, diff):

Metric name                                              Violations  Weighted  Diff.
Avoid unchecked return code (SY-SUBRC) after OPEN SQL…   15,384      246,144   2,192
Avoid undocumented Methods                               28,923      144,615   1,545
Avoid _SELECT *_ or _SELECT SINGLE *_ queries            15,662      125,296     528
Avoid using literals in assignments (hardcoded values)   23,646      118,230     750
Avoid unreferenced Methods                               23,340       93,360   2,500
Avoid Methods with a very low comment/code ratio         23,479       70,437     528
Avoid using LOOP INTO, use LOOP ASSIGNING instead        11,518       69,108     492
Avoid missing WHEN OTHERS in CASE statements              6,511       52,088     360
Avoid Artifacts with a Complex SELECT Clause              7,545       45,270     162
Avoid Artifacts with High Cyclomatic Complexity           5,028       40,224   2,624

This table shows the most severe violations in easy+. Next to the violation count, the weighted value that drives the ranking is shown.

CAST "Biggest Loser" (start weight = quality value per thousand lines of source code, per topic area):

Rank  Topic area         Start weight  Current  Diff. (%)
1     Statistik          220.13        218.92   -0.55
2     Accounting         258.62        258.10   -0.20
3     Abrechnen          210.84        210.72   -0.06
4     OutputManagement   218.02        217.98   -0.02
5     Architektur        251.44        251.46    0.01
6     CustomerCare       270.70        270.97    0.10
7     BusinessWarehouse  244.40        244.74    0.14
8     MPK                262.63        263.25    0.24
9     Messen             252.94        255.03    0.82
      easy+ (overall)    243.70        243.90    0.08

Weighted easy+ quality value and acceptance status: start value 1,388,953, current 1,402,761, diff 13,808 (0.99 %). This shows the current status of the quality gate for the acceptance and system integration tests (AT/SIT): if the current value is less than or equal to the start value, the status is green, otherwise red.

"Everybody wake up, please!"
Because of the development-light release start, the last CAST evaluation is now almost four weeks old. This longer pause will remain the exception, however, and from now on you will again be informed about the current status every two weeks. Despite the comparatively small development volume in these first weeks, various rule violations have crept in again, already exceeding the quality start value by almost a full percent. At the latest with this flyer, all developers should again take care to comply with the CAST rules in their development and to remove existing violations.

"The red-light district is moving"
With the first comparable Biggest Loser numbers since release 35/2, it is now certain that the Business Warehouse topic area has to look for new room lighting, because the red lantern moves desks to the colleagues of the Messen topic area.

CAST Monthly Flyer
29. Unique selling points
• common meta model for development objects
and their relations
• change impact analysis, support for cost
estimation, path finder along call stacks
• cross technology analysis (Java, C#, C++, ABAP,
COBOL...)
• management dashboard and longterm evaluation
• layered, weighted aggregation of results
30. Drawbacks
• speed of ABAP analysis (easy+: 14-17 h)
• no sound ABAP know-how up to now
• some unstable, heuristic rules
• big initial and ongoing investment
• license, maintenance, education, administration
• no integration in ABAP development cycle
• sluggish support and information policy
32. Virtual Forge
CodeProfiler
(in examination)
33. What is it?
• developed by Virtual Forge GmbH
• THE ABAP security experts
• scans ABAP code, checks against rules and
presents the results
• concentrates on ABAP analysis in security,
compliance, performance and robustness
34. How it works
• external scanning and analysis engine,
written in Java
• transfer of source information directly
via RFC or file-based
• results are generated as PDF or
shown in SAP (Tx „Finding Manager“)
• uses SAP BI for management views
36. Example PDF report: Executive Summary

The ABAP code has been analyzed with 100 test cases. 55 of those test
cases yielded findings, totaling 28,825 findings. 1,535 of them have
been rated as critical. The findings are distributed as follows:

Test Domain            Critical (# / ME*)   Total (# / ME*)   Analyzed Testcases
Security               186 / N/A             3,493 / N/A      48
Compliance             292 / N/A             2,069 / N/A       7
Performance            938 / N/A             9,148 / N/A      19
Maintainability          0 / N/A            10,593 / N/A      11
Robustness             119 / N/A             3,522 / N/A      10
Data-Loss-Prevention     0 / N/A                 0 / N/A       5

* ME = Mitigation Effort (N/A = Not completely configured)

Also, 0 countermeasures have been detected that have prevented additional
security findings. Please note that 0 findings have been manually
suppressed by developers. Some test cases are used for informational
purposes only. These yielded 856 findings as a basis for further
analysis by experts.
38. Unique selling points
• Speed of analysis (easy+: 1.5 h)
• data and control flow analysis
• automated corrections possible
• superb rules in security domain
• sound integration in development cycle
• SE80, TMS, ChaRM, CTS(+)
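The "data and control flow analysis" point can be illustrated with a deliberately simplified taint-propagation sketch (not Virtual Forge's actual algorithm; the statement encoding and all names are invented): user input is marked as tainted, taint is propagated through assignments, and a finding is reported when a tainted value reaches a dangerous sink.

```python
# Minimal taint-tracking sketch for security-style data flow analysis.
def find_tainted_sinks(statements, sources, sinks):
    """statements: list of ("assign", dst, src) or ("call", sink, arg)."""
    tainted = set(sources)
    findings = []
    for stmt in statements:
        if stmt[0] == "assign":
            _, dst, src = stmt
            if src in tainted:
                tainted.add(dst)       # taint flows through the assignment
            else:
                tainted.discard(dst)   # overwritten with clean data
        elif stmt[0] == "call":
            _, sink, arg = stmt
            if sink in sinks and arg in tainted:
                findings.append((sink, arg))
    return findings

# Hypothetical mini-program: user input ends up in a dynamic SQL call.
program = [
    ("assign", "lv_where", "lv_input"),
    ("call", "EXEC_SQL", "lv_where"),
]
hits = find_tainted_sinks(program, sources={"lv_input"}, sinks={"EXEC_SQL"})
```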
39. Drawbacks
• not that strong in code quality rules yet
• performance, maintenance, robustness
• no support for other languages than ABAP
42. What is it?
• developed by SAP, in ABAP-OO
• scans ABAP code, checks against rules and
presents the results as tree or list
• integrated into every AS ABAP
• Transactions SCI and SCII
43. How it works
• internal scanning and rule checking,
implemented in ABAP-OO
• no need to transfer development objects, it
all stays in the system
• results can be analyzed in TX SCI
51. CI reports you should
know about
• RS_CI_EMAIL:
sends emails with inspection results to
developers that own violating objects
• RS_CI_EMAILTEMPLATE:
template for this email
• RS_CI_INSPECTOR:
plan inspections as background jobs
52. CI reports you should
know about
• RS_CI_DIFF:
diff between two versions of an
inspection, sends the diff per email
• RS_CI_COMPARE:
diff between two
inspections
53. Unique selling points
• built by the same vendor as AS ABAP
• integrated into ABAP system
• no additional hardware or software
needed
• API to call in custom code and extend
with own rules
54. Unique selling points
• no additional license and maintenance
costs
• strong rules in performance domain
• good integration in development cycle
• stable rules, seldom false positives
55. Drawbacks
• no dashboard or BI integration
• very few cross-object rules
• performance:
• some rules consume a lot of memory
• contains some very slow checks
58. What is it?
• developed by SonarSource and Obeo
• scans ABAP code, checks against 50+ rules
and presents the results in a nice
dashboard
• integrated into the inspection platform Sonar
59. How it works
• scanning and analysing is implemented in
Java, needs JRE/JDK and RDBMS
• code of objects needs to be exported into
files, folder structure defines result
structure
• results are presented in Sonar dashboards
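Since the plug-in reads exported source files, a Sonar project is typically described by a sonar-project.properties file next to them. A minimal sketch (property keys as used in the Sonar 2.x/3.x era; project key, name, and layout below are hypothetical):

```properties
# Hypothetical project: ABAP sources exported one file per object under ./src
sonar.projectKey=easyplus:abap
sonar.projectName=easy+ ABAP
sonar.projectVersion=1.0
sonar.language=abap
sonar.sources=src
```

The folder structure under sonar.sources is then what defines the result structure shown in the dashboards.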
65. Unique selling points
• ease of installation, administration
• moderate costs
• entry to the fabulous Sonar platform
• plugins like Views, SQUALE etc.
• configurable dashboards with myriads
of views
• extensibility
66. Drawbacks
• no source code extractor out of the box
• trying to change this
• small rule base yet (V1.1)
• integration of Code Inspector results in
examination
• no real inhouse ABAP know-how
• analysis runs still break without proper error
documentation in current version
69. Things to know before
establishing static code analysis
• Not everyone is very fond of transparency!
• Talk to your workers‘ council early, if there is one!
• Be aware of „benchmark optimisations“!
• correcting „for the tool“ can have negative
impact
• Be aware of the impact of false positives!
• trust in tools fades with each of them
70. Things to do when
establishing static code analysis
• start small, grow large
• activate one check after the other
• start with new code, then spread by
packages
• integrate analysis results into developers'
daily routine (IDE, TMS)
• exclude generated ABAP coding
71. Things to do when
establishing static code analysis
• have a working QA process established
before starting tool integration
• integrate analysis results into SLAs
• integrate analysis results into managers'
targets
• make them pay for not giving you the
space to build great software!
72. Things to do when
establishing static code analysis
• try to keep the whole
process fun and
entertaining for
developers
• do not overload the
change process
79. Keep it entertaining:
„Biggest Loser“
• At the start of a new release, the sum of
weighted violations per 1k code lines is
measured per development team
• With every snapshot this rating is
recalculated
• At the end of the release, the best team
gets an award
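The rating described above could be sketched like this (illustrative only; the formulas are inferred from the slides, and the sample numbers are the Statistik and Messen values from the flyer example):

```python
# Sketch of the "Biggest Loser" rating: weighted violations are normalized
# per 1,000 lines of code, a baseline is taken at release start, and each
# snapshot ranks teams by their degradation since that baseline.
def rating_per_kloc(weighted_violations, lines_of_code):
    return weighted_violations / (lines_of_code / 1000.0)

def biggest_loser(baseline, snapshot):
    """Return (team, diff in percent) pairs, worst degradation first."""
    deltas = {team: (snapshot[team] - baseline[team]) / baseline[team] * 100
              for team in baseline}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

# Quality values per team from the flyer example:
baseline = {"Messen": 252.94, "Statistik": 220.13}
snapshot = {"Messen": 255.03, "Statistik": 218.92}
ranking = biggest_loser(baseline, snapshot)  # Messen degraded most (~0.82 %)
```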
80. Keep it entertaining:
„The Red Lantern“
• With every snapshot
there is a rating of
development teams
(„Biggest Loser“)
• The team with the
highest degradation
since baseline gets the
Red Lantern on its team
leader‘s desk
83. A fool with a tool...
• the best working tool for static ABAP code
analysis is...
• you, the ABAP expert!
• Integrate tools when code and team size
grow beyond manual review capabilities
84. My personal, biased
advice
• Want to get into tool-based code analysis,
but no money to spend on external tools:
=> SAP Code Inspector
• Substantial code bases in technologies
other than ABAP, cross-technology analysis
a must, more than rules engine needed,
lots of money to spend:
=> CAST AIP
85. My personal, biased
advice
• Die-hard ABAP development, security and
compliance a big concern, results close to
developers a must, a bit of money
set aside:
=> Virtual Forge Code Profiler
• Lots of Java code and a little bit of ABAP,
small budget, no need for deep ABAP
coverage now:
=> keep an eye on Sonar ABAP plug-in
86. Tool comparison

Criteria                            CAST AIP   VirtualForge   SAP Code    Sonar ABAP
                                               CodeProfiler   Inspector   Plug-In
ease of administration              --         +              ++          +
management dashboards               ++         +              --          +
costs                               --         -              ++          O-+
overall technical quality           O          +              +           O
support for other languages         ++         --             --          ++
analysis performance                --         ++             -           --
long-time evaluation, trends        ++         +              --          ++
integration in development process  --         +              +           --
87. Tool comparison (continued)

Criteria                   CAST AIP   VirtualForge   SAP Code    Sonar ABAP
                                      CodeProfiler   Inspector   Plug-In
hardware requirements      --         O              ++          O
extendable with own rules  yes/O      no             yes/+       no
rule documentation         +          ++             O           O
106. Exclude generated
maintenance views
• In the object set you can exclude the
generated function groups for
maintenance views
• These function groups cause a lot of
violations that you should not
bother about