External quality characteristics are those parts of a product
that face its users, where internal quality characteristics are
those that do not.
Steve McConnell

A product's quality is a function of how much it changes
the world for the better.
Tom DeMarco
public void run(ProjectFile pf) {
    // We do not support directories
    if (pf.getIsDirectory()) {
        return;
    }
    // Create an input stream from the project file's contents
    InputStream in = fds.getFileContents(pf);
    if (in == null) {
        return;
    }
    try {
        // Measure the number of lines in the project file
        LineNumberReader lnr =
            new LineNumberReader(new InputStreamReader(in));
        int lines = 0;
        while (lnr.readLine() != null) {
            lines++;
        }
        lnr.close();
        // Store the results
        Metric metric = Metric.getMetricByMnemonic("LOC");
        ProjectFileMeasurement locm = new ProjectFileMeasurement();
        locm.setMetric(metric);
        locm.setProjectFile(pf);
        locm.setWhenRun(new Timestamp(System.currentTimeMillis()));
        locm.setResult(String.valueOf(lines));
        db.addRecord(locm);
    } catch (IOException e) {
        log.error(this.getClass().getName() + " IO Error <" + e
                + "> while measuring: " + pf.getFileName());
    }
}
In large organisations, software quality evaluations are performed by dedicated teams that assess a project's conformance to its specifications.
This may sound like a rhetorical question, but it is not.
Then there is the problem of format disparity. Most research so far has been done with CVS and Bugzilla, but in reality there are many tools that store process data.
The data sets involved in large experiments are themselves very large.
All those reasons led our research group to the SQO-OSS project, which produced the Alitheia Core tool. The project's aim was to produce software quality analysis tools, but the original targets drifted towards creating infrastructure rather than the tools themselves.
Alitheia Core is a research platform: a metric plug-in based architecture and a corresponding execution engine.
It enables new product and process software metrics that take advantage of this infrastructure.
As you can see, there are two kinds of interfaces: thin ones bring in concrete objects, while fat ones combine multiple elements (metadata with data, and data from multiple data sources).
Here is how the most important parts work together. The project data is maintained on a project mirror, which is managed externally through a set of Perl scripts. The metadata updater component uses this data to extract metadata and put it in the DB. Metrics use higher-level services that combine raw data and metadata to obtain their input, and store their results directly in the DB.
What Alitheia Core tries to do is maintain a database of metadata and metric results. The schema is hierarchical, and everything is bound to a project.
The metadata update is a two-stage process.
In the first stage the system reads raw data (source code logs, mail messages, and bug reports) and extracts just enough metadata to create abstract representations in the database. No real data goes into the database, just metadata.
In the second stage, the metadata is used to extract semantic relationships among metadata entities: for example, to extract bug numbers from commit messages and create links between the two, to recover developer identities, or to reconstruct mailing list threads.
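The second-stage linking step can be sketched as a small extractor that pulls bug numbers out of a commit message, so the commit entity can be linked to the corresponding bug entities. The regular expression below is an illustrative assumption, not the exact heuristic Alitheia Core uses.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BugRefExtractor {
    // Matches "bug 123", "issue #45", or a bare "#678" (hypothetical heuristic)
    private static final Pattern BUG_REF =
        Pattern.compile("(?:bug|issue)\\s*#?(\\d+)|#(\\d+)",
                        Pattern.CASE_INSENSITIVE);

    /** Returns the bug ids referenced in a commit message, in order. */
    public static List<Integer> extractBugIds(String commitMessage) {
        List<Integer> ids = new ArrayList<>();
        Matcher m = BUG_REF.matcher(commitMessage);
        while (m.find()) {
            String id = m.group(1) != null ? m.group(1) : m.group(2);
            ids.add(Integer.parseInt(id));
        }
        return ids;
    }
}
```

In the real system the extracted ids would be stored as links between the commit and bug tables rather than returned to the caller.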
A metric defines an activation type, which is an entity in the metadata schema I presented earlier, and also declares dependencies on other metrics. The system will then automatically invoke the metric whenever it finds new instances of the activation type after an update. Let's see an example.
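The activation-type mechanism can be pictured with a minimal interface sketch. The names here (`MetricPlugin`, `activationType`, `dependencies`) are illustrative, not the real Alitheia Core API.

```java
import java.util.List;

/** Hypothetical shape of a metric plug-in's declaration. */
interface MetricPlugin {
    /** Entity type whose new instances trigger this metric after an update. */
    Class<?> activationType();
    /** Mnemonics of metrics that must have run before this one. */
    List<String> dependencies();
    /** Invoked by the system for each new activation-type instance. */
    void run(Object activationInstance);
}

/** A ProjectFile-activated line-counting metric, as in the listing above. */
class LocMetric implements MetricPlugin {
    /** Stand-in for the real ProjectFile metadata entity. */
    static class ProjectFile { String name; }

    public Class<?> activationType() { return ProjectFile.class; }
    public List<String> dependencies() { return List.of(); } // no deps
    public void run(Object o) {
        ProjectFile pf = (ProjectFile) o;
        // ... measure the file and store the result, as in run() above ...
    }
}
```

With a declaration like this, the updater only needs to know which entity types changed; it never needs metric-specific logic.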
This is the implementation of the line-counting plug-in I presented earlier, minus some bureaucracy (constructors, imports, etc.). It takes just about 20 lines of code to retrieve a file from a revision, read its lines, and store the result in the database. This is comparable to a shell or Python script, but much faster. The abstractions Alitheia Core provides are very high level and cross-platform, which is why the overhead it adds to algorithm implementation is minimal.
Let us first walk through the life-cycle of a metric plug-in.
Specifically, let's see how we can calculate a metric (wc in this case) for a single file.
The updater sees that a file has changed in a given revision and queues the run method of the LoC metric,
and later on the scheduler runs the job.
The job obtains the details of the object (file, mailing list, bug) it has to measure
and updates the database with the result.
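The update-schedule-run-store cycle above can be modelled in a few lines. This is a toy sketch: `MetricJob` and the `results` map stand in for the real scheduler jobs and database tables, which are assumptions of this illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executor;

public class MetricLifecycle {
    /** Stands in for the metric-results tables in the database. */
    static final Map<String, Integer> results = new ConcurrentHashMap<>();

    /** The job the updater enqueues when it sees that a file changed. */
    record MetricJob(String fileName, String contents) implements Runnable {
        public void run() {
            // wc-style measurement, then the "database" update
            int lines = contents.isEmpty() ? 0 : contents.split("\n").length;
            results.put(fileName, lines);
        }
    }

    /** What the updater does: hand the job to the scheduler. */
    public static void updateArrived(Executor scheduler,
                                     String file, String contents) {
        scheduler.execute(new MetricJob(file, contents));
    }
}
```

The scheduler decides when the job actually runs; the updater only enqueues it and moves on.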
Now let us walk through the corresponding client path.
The client asks for a metric for a given file version.
The web service calls the plug-in administrator to obtain a reference to the specific plug-in.
Given the reference, it can then call the metric.
The metric accesses the database to obtain the results in a metric-native format,
and the web service returns the results to the client.
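The retrieval path can be sketched as a lookup through a registry followed by a call on the returned reference. All names here (`PluginAdmin`, `byMnemonic`, `serveRequest`) are illustrative stand-ins for the real web service and plug-in administrator.

```java
import java.util.Map;

public class ResultLookup {
    /** The callable surface of a metric plug-in. */
    interface Metric { String getResult(String fileVersion); }

    /** Plays the role of the plug-in administrator. */
    static class PluginAdmin {
        private final Map<String, Metric> plugins;
        PluginAdmin(Map<String, Metric> plugins) { this.plugins = plugins; }
        Metric byMnemonic(String mnemonic) { return plugins.get(mnemonic); }
    }

    /** What the web service does for one client request. */
    static String serveRequest(PluginAdmin admin, String mnemonic,
                               String fileVersion) {
        Metric m = admin.byMnemonic(mnemonic);        // obtain the reference
        return m == null ? null : m.getResult(fileVersion); // call the metric
    }
}
```

The point of the indirection is that the web service never binds to a concrete plug-in class; it only ever sees the reference the administrator hands back.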
SQO-OSS is designed to handle hundreds of projects, tens of thousands of files, and hundreds of thousands of revisions, messages, and bug reports. This means it is architected for performance. Let me give you an example.
A crude example is the retrieval of metadata for the live files in a particular version. In Alitheia Core this is a database query that executes in about 2 seconds, even under load.
Compare that with the corresponding time for checking out XBMC. This small example shows how storing the metadata in the database pays off.
The workloads we process are embarrassingly parallel, because projects are independent of each other. The execution path from the mirror to the metadata database is lock-free; only plug-ins that process data across projects need to be careful when updating a shared field in the DB. This allows us to scale almost for free. We implemented a very simple clustering system that binds a project to the node it was registered from and does not allow any other processing for it to take place on other nodes.
This is our cluster: on the left, a 3-thread processing node plus the project server; in the middle, a 6-thread processing core; at the bottom, mirroring and storage of raw data, the file server, the database, and 8 processing threads to keep the CPUs busy during I/O; on the right, the web server plus 16 slow processing threads. To scale, we just need more processing nodes.
Given enough data to process, this simple setup can bring all the machines involved to their knees. For example, this screenshot displays info from the DB server while all three machines (node, web server, DB server) run metrics: 2.35k queries per second, 46 open transactions, 32 active jobs on the web server. All three machines combined can chew through more than 500 wc jobs per second, or 50 contribution plug-in jobs per second.
Another example of the cluster on its knees, and a display of linear scalability: the screen of the database server running just the database while other nodes start connecting. The queries-per-second increase is almost linear, and the load equals the number of processor cores, i.e. the machine is saturated.
The SQO-OSS web site and the Eclipse plug-in display various quality metrics immediately after each commit. For example, imagine you are writing some code and, when you commit, you immediately see its effects on various metrics (e.g. coupling, dependencies) across the whole system; or that during a monthly code inspection you can immediately recognise bad code smells...
...and be automatically notified when bad code smells start to build up, or, on the other hand, when the system is doing well and is probably ready for release.
Other possibilities include test case integration, finding classes with no test cases, and integration of various custom-developed tools for monitoring progress.
SQO-OSS is a platform, not just a web site. It was built to allow easy integration of tools; we already have C++ and Python based interfaces, so one can write plug-ins in Java, C++, or Python. In fact, we have wc in all three languages. :-) A plug-in can also wrap an external program by executing it from C++ or Java and parsing its text output.
The platform offers convenient access to data, preprocessed into relational format. DB dumps will be made available from our web site.