This document discusses tools for improving CFML code quality through static code analysis, focusing on CFLint, which checks CFML code against configurable linting rules to identify potential issues. It also covers other metrics and tools for measuring code-quality factors such as complexity, performance and maintainability.
7. CONDITION OF EXCELLENCE IMPLYING
FINE QUALITY
AS DISTINCT FROM BAD QUALITY
https://www.flickr.com/photos/serdal/14863608800/
8. SOFTWARE AND CODE QUALITY
TYPES OF QUALITY
▸ Quality can be objective or subjective
▸ Subjective quality: dependent on personal experience to recognise
excellence. Subjective quality is ‘universally true’ from the observer’s point of
view.
▸ Objective quality: measure ‘genius’, quantify and repeat → feedback loop
9. SOFTWARE AND CODE QUALITY
CAN RECOGNISING QUALITY BE LEARNED?
▸ Chicken sexing seems to be something industry professionals lack objective
criteria for
▸ Does chicken sexing as process of quality determination lead to subjective or
objective quality?
▸ What about code and software?
▸ How can we improve in determining objective quality?
10. “ANY FOOL CAN WRITE CODE THAT A COMPUTER CAN
UNDERSTAND. GOOD PROGRAMMERS WRITE CODE
THAT HUMANS CAN UNDERSTAND.”
Martin Fowler
SOFTWARE AND CODE QUALITY
13. METRICS AND MEASURING
DIFFERENT TYPES OF METRICS
▸ There are various categories to measure software quality in:
▸ Completeness
▸ Performance
▸ Aesthetics
▸ Maintainability and Support
▸ Usability
▸ Architecture
14. METRICS AND MEASURING
COMPLETENESS
▸ Fit for purpose
▸ Code fulfils requirements: use cases,
specs etc.
▸ All tests pass
▸ Tests cover all/most of the code
execution
▸ Security
https://www.flickr.com/photos/chrispiascik/4792101589/
15. METRICS AND MEASURING
PERFORMANCE
▸ Artefact size and efficiency
▸ System resources
▸ Behaviour under load
▸ Capacity limitations
https://www.flickr.com/photos/dodgechallenger1/2246952682/
16. METRICS AND MEASURING
AESTHETICS
▸ Readability of code
▸ Matches agreed coding style guides
▸ Organisation of code in a class/
module/component etc.
https://www.flickr.com/photos/nelljd/25157456300/
17. METRICS AND MEASURING
MAINTAINABILITY / SUPPORT
▸ Future maintenance of the code
▸ Documentation
▸ Stability/Lifespan
▸ Scalability
https://www.flickr.com/photos/dugspr/512883136/
18. METRICS AND MEASURING
USABILITY
▸ Positive user experience
▸ Positive reception
▸ UI leveraging best practices
▸ Support for impaired users
https://www.flickr.com/photos/baldiri/5734993652/
19. METRICS AND MEASURING
ARCHITECTURE
▸ System complexity
▸ Module cohesion
▸ Module dependency
https://www.flickr.com/photos/mini_malist/14416440852/
21. “YOU CAN'T CONTROL WHAT YOU CAN'T
MEASURE.”
Tom DeMarco
METRICS AND MEASURING
22. METRICS AND MEASURING
WHY WOULD WE WANT TO MEASURE ELEMENTS OF QUALITY?
▸ It’s impossible to add quality later, start early to:
▸ identify potential technical debt
▸ find and fix bugs early in the development work
▸ track your test coverage.
23. METRICS AND MEASURING
COST OF FIXING ISSUES
▸ Rule of thumb: The later you find a
problem in your software the more
effort, time and money is involved in
fixing it.
▸ Note: There has NEVER been any
scientific study into what the
appropriate ratios are - it’s all
anecdotal/made up numbers… the
zones of unscientific fluffiness!
24. METRICS AND MEASURING
HOW CAN WE MEASURE?
▸ Automated vs. manual
▸ Tools vs. humans
▸ Precise numeric values vs. ‘gut feeling’
… but what about those ‘code smells’?
25. METRICS AND MEASURING
WHAT CAN WE MEASURE?
▸ Certain metric categories lend themselves to being taken at design/code/
architecture level
▸ Others might have to be dealt with at other levels, e.g. acceptance criteria, ’fit
for purpose’, user happiness, etc.
26. METRICS AND MEASURING
COMPLETENESS
▸ Fit for purpose — Stakeholders/customers/users
▸ Code fulfils requirements: use cases, specs etc — BDD (to some level)
▸ All tests pass — TDD/BDD/UI tests
▸ Tests cover all/most of the code execution? — Code Coverage tools
▸ Security — Code security scanners
27. METRICS AND MEASURING
PERFORMANCE
▸ Artefact size and efficiency — Deployment size
▸ System resources — Load testing/System monitoring
▸ Behaviour under load — Load testing/System monitoring
▸ Capacity limitations — Load testing/System monitoring
28. METRICS AND MEASURING
AESTHETICS
▸ Readability of code — Code style checkers (to some level) & Human review
▸ Matches agreed coding style guides — Code style checkers
▸ Organisation of code in a class|module|component etc. — Architecture checks &
Human review
29. METRICS AND MEASURING
MAINTAINABILITY/SUPPORT
▸ Future maintenance of the code — Code style checkers & Human review
▸ Documentation — Documentation tools
▸ Stability/Lifespan — System monitoring
▸ Scalability — System monitoring/Architecture checks
30. METRICS AND MEASURING
USABILITY
▸ Positive user experience — UI/AB tests & Human review
▸ Positive reception — Stakeholders/customers/users
▸ UI leveraging best practices — UI/AB tests & Human review
▸ Support for impaired users — a11y checker & UI/AB tests & Human review
32. METRICS AND MEASURING
LINES OF CODE
▸ LOC: lines of code
▸ CLOC: commented lines of code
▸ NCLOC: not commented lines of code
▸ LLOC: logic lines of code
LOC = CLOC + NCLOC
LLOC <= NCLOC
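The relationship LOC = CLOC + NCLOC can be illustrated with a minimal line classifier; a simplified sketch that only recognises single-line `//` comments (real tools also handle block comments and blank-line conventions):

```python
def line_metrics(source: str) -> dict:
    """Classify lines so that LOC = CLOC + NCLOC (sketch: // comments only)."""
    loc = cloc = 0
    for raw in source.splitlines():
        loc += 1                              # every line counts toward LOC
        if raw.strip().startswith("//"):
            cloc += 1                         # comment-only line
    return {"LOC": loc, "CLOC": cloc, "NCLOC": loc - cloc}

sample = """// compute total
total = 0;
// loop over items
for (item in items) total += item;"""
print(line_metrics(sample))  # {'LOC': 4, 'CLOC': 2, 'NCLOC': 2}
```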
33. METRICS AND MEASURING
COMPLEXITY
▸ McCabe (cyclomatic) counts number of decision points in a function: if/else,
switch/case, loops, etc.
▸ low: 1-4, normal: 5-7, high: 8-10, very high: 11+
▸ nPath tracks number of unique execution paths through a function
▸ values of 150+ are usually considered too high
▸ McCabe usually yields a much smaller value than nPath
▸ Halstead metrics feed into the maintainability index metric; quite an involved calculation
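The McCabe idea above can be sketched in a few lines: start at 1 (one straight-line path) and add one for each decision point. This uses Python's own AST as a stand-in for a CFML parse tree:

```python
import ast

def mccabe(source: str) -> int:
    """McCabe cyclomatic complexity: 1 + number of decision points."""
    score = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp,
                             ast.ExceptHandler)):
            score += 1
        elif isinstance(node, ast.BoolOp):
            score += len(node.values) - 1   # each and/or adds a branch
    return score

src = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(mccabe(src))  # 3: the base path plus two if-branches -> 'low' complexity
```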
38. “THE PROBLEM WITH ‘QUICK AND DIRTY’ FIXES
IS THAT THE DIRTY STAYS AROUND FOREVER
WHILE THE QUICK HAS BEEN FORGOTTEN”
Common wisdom among software developers
TOOLING AND ANALYSIS
39. TOOLING AND ANALYSIS
TOOLING
▸ Testing: TDD/BDD/Spec tests, UI tests, user tests, load tests
▸ System management & monitoring
▸ Security: Intrusion detection, penetration testing, code scanner
▸ Code and architecture reviews and style checkers
40. TOOLING AND ANALYSIS
CODE ANALYSIS
▸ Static analysis: checks code that is not currently being executed
▸ Linter, syntax checking, style checker, architecture tools
▸ Dynamic/runtime analysis: checks code while being executed
▸ Code coverage, system monitoring
Test tools can fall into either category
41. TOOLING AND ANALYSIS
TOOLS FOR STATIC ANALYSIS
▸ CFLint: Linter, checking code by going through a set of rules
▸ CFML Complexity Metric Tool: McCabe index
42. TOOLING AND ANALYSIS
TOOLS FOR DYNAMIC ANALYSIS
▸ Rancho: Code coverage from Kunal Saini
▸ CF Metrics: Code coverage and statistics
44. STATIC CODE ANALYSIS FOR CFML
A STATIC CODE ANALYSER FOR CFML
▸ Started by Ryan Eberly
▸ Sitting on top of Denny Valiant's CFParser project
▸ Mission statement:
▸ ‘Provide a robust, configurable and extendable linter for CFML’
▸ Currently works with ACF and Lucee, main line of support is for ACF though
▸ Team of 4-5 regular contributors
45. STATIC CODE ANALYSIS FOR CFML
CFLINT
▸ Written in Java, requires Java 8+ to compile and run
▸ Unit tests can be contributed/executed without Java knowledge
▸ CFLint depends on CFParser to grok the code to analyse
▸ Various tooling/integration through 3rd party plugins
▸ Source is on Github
▸ Built with Gradle, distributed via Maven
47. STATIC CODE ANALYSIS FOR CFML
LINTING (I)
▸ CFLint traverses the source tree depth first:
▸ Component → Function → Statement → Expression → Identifier
▸ CFLint maintains its own scope during linting:
▸ Current directory/filename
▸ Current component
▸ Current function
▸ Variables that are declared/attached to the scope
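The depth-first traversal with its own scope can be sketched as follows; this is an illustrative stand-in using Python's AST visitor, not CFLint's actual Java implementation:

```python
import ast

class ScopeTracker(ast.NodeVisitor):
    """Depth-first walk that tracks the current function and its variables."""
    def __init__(self):
        self.current_function = None
        self.declared = {}                 # function name -> declared variables

    def visit_FunctionDef(self, node):
        previous = self.current_function
        self.current_function = node.name
        self.declared[node.name] = set()
        self.generic_visit(node)           # descend depth-first into statements
        self.current_function = previous   # restore scope on the way back up

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store) and self.current_function:
            self.declared[self.current_function].add(node.id)
        self.generic_visit(node)

tracker = ScopeTracker()
tracker.visit(ast.parse("def f():\n    x = 1\n    y = x + 2\n"))
print(tracker.declared)  # declared under 'f': x and y (set order may vary)
```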
48. STATIC CODE ANALYSIS FOR CFML
LINTING (II)
▸ The scope is called the CFLint Context
▸ Provided to linting plugins
▸ Plugins do the actual work and feed reporting information back to CFLint
based on information in the Context and the respective plugin
▸ TL;DR: plugins ~ linting rules
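The "plugins ~ linting rules" idea can be sketched as rules that receive a context object and report findings back; all names here are illustrative, not CFLint's actual Java API:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Hypothetical linting context: where we are, plus collected findings."""
    filename: str
    function: str
    findings: list = field(default_factory=list)

    def report(self, rule: str, message: str):
        self.findings.append((rule, f"{self.filename}:{self.function}: {message}"))

def function_name_rule(ctx: Context):
    """Example rule: flag function names that do not start lowercase."""
    if not ctx.function[:1].islower():
        ctx.report("FUNCTION_NAME", "function name should start lowercase")

ctx = Context("Cart.cfc", "AddItem")
function_name_rule(ctx)
print(ctx.findings)  # one FUNCTION_NAME finding for 'AddItem'
```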
49. STATIC CODE ANALYSIS FOR CFML
CFPARSER
▸ CFParser parses CFML code using two different approaches:
▸ CFML Tags: Jericho HTMLParser
▸ CFScript: ANTLR 4 grammar
▸ Output: AST (abstract syntax tree) of the CFML code
▸ CFLint builds usually rely on a certain CFParser release
▸ CFML expressions, statements and tags end up in CFLint being represented as
Java classes: CFStatement, CFExpression etc.
50. STATIC CODE ANALYSIS FOR CFML
REPORTING
▸ Currently four output formats:
▸ Text-based for Human consumption
▸ JSON object
▸ CFLint XML
▸ FindBugs XML
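To make the format list concrete, here is an illustrative sketch of one finding rendered as text and as JSON; the field names are assumptions for illustration, not CFLint's exact output schema:

```python
import json

finding = {"rule": "VAR_INVALID_NAME", "severity": "WARNING",
           "file": "Cart.cfc", "line": 12,
           "message": "Variable n should be descriptive"}

def as_text(f: dict) -> str:
    """Human-readable one-liner for a single finding."""
    return f"{f['severity']} {f['rule']} {f['file']}:{f['line']} {f['message']}"

print(as_text(finding))
print(json.dumps({"issues": [finding]}, indent=2))
```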
51. STATIC CODE ANALYSIS FOR CFML
TOOLING
▸ Various IDE and CI server integrations
▸ 3rd party projects: SublimeLinter (Sublime Text 3), ACF Builder extension,
AtomLinter (Atom), Visual Studio Code
▸ IntelliJ IDEA coming later this year or early 2018 — from me
▸ Jenkins plugin
▸ TeamCity (via Findbugs XML reporting)
▸ SonarQube
▸ NPM wrapper
52. STATIC CODE ANALYSIS FOR CFML
CONTRIBUTING
▸ Use CFLint with your code and provide feedback
▸ Talk to us and say hello!
▸ Provide test cases in CFML for issues you find
▸ Work on some documentation improvements
▸ Fix small and beginner-friendly CFLint tickets in Java code
▸ Become part of the regular dev team! :-)
53. STATIC CODE ANALYSIS FOR CFML
ROADMAP
▸ 1.0.1 — March 2017; first release after 2 years of betas :)
▸ 1.1 — June 2017; internal release
▸ 1.2.0-3 — August 2017
▸ Documentation/output work
▸ Internal changes to statistics tracking
▸ 1.3 — In progress; parsing/linting improvements, CommandBox
54. STATIC CODE ANALYSIS FOR CFML
ROADMAP
▸ 2.0 — 2018
▸ Complete rewrite of output and reporting
▸ Complete rewrite and clean up of configuration
▸ Performance improvements (parallelising linting)
▸ API for tooling
▸ Code metrics
55. STATIC CODE ANALYSIS FOR CFML
ROADMAP
▸ 3.0 — ???
▸ Support for rules in CFML
▸ Abstract internal DOM
▸ New rules based on the DOM implementation