Metrics Reference Guide

This guide describes the historic and transient metric providers, as well as factoids, provided by the Scava platform.


Historic Metric Providers

Historic metrics maintain a record of various heuristics associated with a specific open source project over its lifetime. They typically depend on the results of one or more transient metrics and are usually displayed in the Scava dashboards.

Historic Metric Providers for Bug Trackers

The following Historic Metric Providers are associated with issue trackers.

Back to top


org.eclipse.scava.metricprovider.historic.bugs.bugs

This metric computes the number of bugs per day for each bug tracker separately. It also computes additional information, such as the average number of comments per bug, comments per user, and requests and/or replies per user and per bug.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.comments

This metric computes the number of bug comments submitted by the community (users) per day for each bug tracker.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.emotions

This metric computes the emotional dimensions present in bug comments submitted by the community (users) per day for each bug tracker. Emotion can be 1 of 6 (anger, fear, joy, sadness, love or surprise).

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.migrationissues

This metric stores how many migration issues have been found per day for each bug tracker.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.migrationissuesmaracas

This metric stores how many migration issues have been found containing changes detected with MARACAS per day for each bug tracker.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.newbugs

This metric computes the number of new bugs reported by the community (users) per day for each bug tracker. A small number of bug reports can indicate either a bug-free, robust project or a project with a small/inactive user community.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.newusers

This metric computes the number of new users per day for each bug tracker separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.opentime

This metric computes the average duration between creating and closing bugs. Format: dd:HH:mm:ss:SS, where dd=days, HH=hours, mm=minutes, ss=seconds, SS=milliseconds.
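
As an illustration, a duration in milliseconds can be rendered in this format as sketched below. This is a minimal sketch: the two-digit field widths and the three-digit millisecond field are assumptions, not necessarily the platform's exact formatting.

```python
def format_duration(millis):
    """Render a duration in milliseconds as dd:HH:mm:ss:SS."""
    days, rem = divmod(millis, 86_400_000)   # dd = whole days
    hours, rem = divmod(rem, 3_600_000)      # HH = hours
    minutes, rem = divmod(rem, 60_000)       # mm = minutes
    seconds, ms = divmod(rem, 1_000)         # ss = seconds, SS = milliseconds
    return f"{days:02d}:{hours:02d}:{minutes:02d}:{seconds:02d}:{ms:03d}"
```

For example, format_duration(90_061_001) yields "01:01:01:01:001", i.e. one day, one hour, one minute, one second and one millisecond.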

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.patches

This metric computes the number of bug patches per day, for each bug tracker separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.requestsreplies

This metric computes the number of requests and replies relating to comments posted to bugs by the community (users) per day for each bug tracker separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.requestsreplies.average

This metric computes the average number of bug comments considered as request and reply for each bug tracker per day.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.responsetime

This metric computes the average time in which the community (users) responds to open bugs per day for each bug tracker separately. Format: dd:HH:mm:ss:SS, where dd=days, HH=hours, mm=minutes, ss=seconds, SS=milliseconds.

Variable                             Type
-----------------------------------  ------
bugTrackerId                         String
avgResponseTimeFormatted             String
cumulativeAvgResponseTimeFormatted   String
avgResponseTime                      float
cumulativeAvgResponseTime            float
bugsConsidered                       int
cumulativeBugsConsidered             int

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.sentiment

This metric computes the overall sentiment per bug tracker up to the processing date. The overall sentiment score could be -1 (negative sentiment), 0 (neutral sentiment) or +1 (positive sentiment). In the computation, the sentiment score for each bug contributes equally, regardless of its size.

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.severity

This metric computes the number of bugs at each severity level, for bugs submitted by the community (users) every day for each bug tracker. Specifically, it calculates the number and percentage of bugs that have been categorised into 1 of 8 severity levels (blocker, critical, major, minor, enhancement, normal, trivial, unknown). A bug's severity is considered unknown if there is not enough information for the classifier to make a decision, for example, an unanswered bug with no user comment to analyse.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.severitybugstatus

This metric computes the total number and percentage of each bug status per severity level, in bugs submitted every day, per bug tracker. There are 7 bug statuses (ResolvedClosed, WontFix, WorksForMe, NonResolvedClosed, Invalid, Fixed, Duplicate) and 8 severity levels (blocker, critical, major, minor, enhancement, normal, trivial, unknown). A bug's severity is considered unknown if there is not enough information for the classifier to make a decision, for example, an unanswered bug with no user comment to analyse.

Additional Information :

Visualisation Output :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.severityresponsetime

This metric computes the average time in which the community (users) responds to open bugs per severity level per day for each bug tracker. Format: dd:HH:mm:ss:SS, where dd=days, HH=hours, mm=minutes, ss=seconds, SS=milliseconds. Note: there are 8 severity levels (blocker, critical, major, minor, enhancement, normal, trivial, unknown). A bug's severity is considered unknown if there is not enough information for the classifier to make a decision, for example, an unanswered bug with no user comment to analyse.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.severitysentiment

This metric computes, for each bug severity level, the average sentiment and the sentiment at the beginning and end of bug comments posted by the community (users) every day for each bug tracker. The sentiment score can be closer to -1 (negative sentiment), 0 (neutral sentiment) or +1 (positive sentiment). There are 8 severity levels (blocker, critical, major, minor, enhancement, normal, trivial, unknown). A bug's severity is considered unknown if there is not enough information for the classifier to make a decision, for example, an unanswered bug with no user comment to analyse.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.status

This metric computes the total number of bugs that correspond to each bug status, in bugs submitted every day, per bug tracker. There are 7 bug statuses (ResolvedClosed, WontFix, WorksForMe, NonResolvedClosed, Invalid, Fixed, Duplicate).

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.topics

This metric computes the labels of topic clusters extracted from bug comments submitted by the community (users), per bug tracker.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.unansweredbugs

This metric computes the number of unanswered bugs per day.

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.bugs.users

This metric computes the number of users, as well as the number of active and inactive users, per day for each bug tracker separately.

Additional Information :

Visualisation Output Information :

Back to top


Historic Metric Providers for Newsgroups and Forums

The following Historic Metric Providers are associated with newsgroups.

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.articles

This metric computes the number of articles submitted by the community (users) per day for each newsgroup separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.emotions

This metric computes the emotional dimensions present in newsgroup comments submitted by the community (users) per day for each newsgroup. Emotion can be 1 of 6 (anger, fear, joy, sadness, love or surprise).

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.migrationissues

This metric detects migration issues in articles per day for each newsgroup.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.migrationissuesmaracas

This metric stores how many migration issues have been found containing changes detected with MARACAS per day for each newsgroup.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.newthreads

This metric computes the number of new threads submitted by the community (users) per day for each newsgroup separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.newusers

This metric computes the number of new users per day for each newsgroup separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.requestsreplies

This metric computes the number of requests and replies in newsgroup articles submitted by the community (users) per day for each newsgroup separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.requestsreplies.average

This metric computes the average number of newsgroup articles per day, including the average number of requests and replies within those articles.

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.responsetime

This metric computes the average time in which the community responds to open threads per day for each newsgroup separately. Format: dd:HH:mm:ss:SS, where dd=days, HH=hours, mm=minutes, ss=seconds, SS=milliseconds.

Visualisation Output Information :


org.eclipse.scava.metricprovider.historic.newsgroups.sentiment

This metric computes the overall sentiment per newsgroup repository up to the processing date. The overall sentiment score could be -1 (negative sentiment), 0 (neutral sentiment) or +1 (positive sentiment). In the computation, the sentiment score of each thread contributes equally, irrespective of its size.

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.severity

This metric computes the number of threads at each severity level, for threads submitted every day, per newsgroup. There are 7 severity levels (blocker, critical, major, minor, enhancement, normal, trivial).

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.severityresponsetime

This metric computes the average time in which the community (users) responds to open threads per severity level per day for each newsgroup. Format: dd:HH:mm:ss:SS, where dd=days, HH=hours, mm=minutes, ss=seconds, SS=milliseconds. Note: there are 7 severity levels (blocker, critical, major, minor, enhancement, normal, trivial).

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.severitysentiment

This metric computes the average sentiment, the sentiment at the beginning of threads and the sentiment at the end of threads; for each severity level in newsgroup threads submitted every day. Sentiment can be -1 (negative sentiment), 0 (neutral sentiment) or +1 (positive sentiment). Note: there are 7 severity levels (blocker, critical, major, minor, enhancement, normal, trivial).

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.threads

This metric computes the number of threads per day for each newsgroup separately. The metric also computes average values for articles per thread, requests per thread, replies per thread, articles per user, requests per user and replies per user.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.topics

This metric computes the labels of topic clusters in articles submitted by the community (users), for each newsgroup separately.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.unansweredthreads

This metric computes the number of unanswered threads per day for each newsgroup separately.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.newsgroups.users

This metric computes the number of users, including active and inactive users per day for each newsgroup separately.

Additional Information :

Visualisation Output Information :

Back to top


Historic Metric Providers for Commits and Committers

The following Historic Metric Providers are related to the commits and committers of a project.

Back to top


trans.rascal.activecommitters.committersoverfile.historic

Calculates the Gini coefficient of committers per file.
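
The idea can be sketched as follows. This is an illustrative implementation, not the Rascal code the provider actually uses: 0 means committers are spread perfectly evenly across files, while values approaching 1 mean a few files concentrate most committer activity.

```python
def gini(values):
    """Gini coefficient of a list of non-negative counts (0 = perfectly even)."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Rank-weighted formulation: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n
```

For example, gini([1, 1, 1, 1]) is 0.0 (even spread), while gini([0, 0, 0, 4]) is 0.75 (all activity in one file).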

Back to top


trans.rascal.activecommitters.percentageOfWeekendCommits.historic

Percentage of commits made during the weekend

Back to top


trans.rascal.activecommitters.commitsPerDeveloper.historic

The number of commits per developer indicates not only the volume of an individual's contribution but also, when combined with other metrics such as churn, the style in which he or she commits. A few big commits are different from many small commits. This metric is also used downstream by other metrics.

Back to top


trans.rascal.activecommitters.numberOfActiveCommittersLongTerm.historic

Number of long-term active committers over time (active in the last year). This measures a smooth window of one year: every day we report the number of developers active in the previous 365 days.

Back to top


trans.rascal.activecommitters.numberOfActiveCommitters.historic

Number of active committers over time (active in the last two weeks). This measures a smooth window of two weeks: every day we report the number of developers active in the previous 14 days.
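
The sliding-window count can be sketched as below. This is a simplified illustration (the provider itself is implemented in Rascal); the function and parameter names are invented for the example.

```python
from datetime import date, timedelta

def active_committers(commits, day, window_days=14):
    """Count distinct authors with at least one commit in the
    window_days-day window ending on `day` (inclusive)."""
    start = day - timedelta(days=window_days - 1)
    return len({author for author, d in commits if start <= d <= day})
```

With commits by "alice" on 2020-01-01 and "bob" on 2020-01-10, asking on 2020-01-16 counts only "bob", since "alice" has dropped out of the 14-day window.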

Back to top


rascal.generic.churn.commitsToday.historic

Counts the number of commits made today.

Back to top


rascal.generic.churn.churnToday.historic

Counts the churn for today: the total number of lines of code added and deleted. This metric is used further downstream to analyze trends.

Back to top


rascal.generic.churn.churnPerCommitInTwoWeeks.historic

The ratio between the churn and the number of commits indicates how large each commit is on average. We compute this as a sliding average over two weeks, which smooths out exceptions and makes it possible to see a historical trend. Commits should not be too big all the time, because that would indicate either that programmers are not focusing on well-defined tasks or that the system architecture does not allow for separation of concerns.
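
The sliding average can be sketched as follows; this is an illustration with invented names, not the provider's actual Rascal implementation.

```python
def churn_per_commit(daily_churn, daily_commits, window=14):
    """For each day, the ratio of total churn to total commits
    over the last `window` days."""
    ratios = []
    for i in range(len(daily_churn)):
        lo = max(0, i - window + 1)          # start of the sliding window
        churn = sum(daily_churn[lo:i + 1])
        commits = sum(daily_commits[lo:i + 1])
        ratios.append(churn / commits if commits else 0.0)
    return ratios
```

For example, with 10 lines over 1 commit on day one and 30 lines over 3 commits on day two, both days report an average commit size of 10 lines.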

Back to top


rascal.generic.churn.filesPerCommit.historic

Counts the number of files per commit to find out about the separation of concerns in the architecture or in the tasks the programmers perform. This metric is used further downstream.

Back to top


rascal.generic.churn.churnPerCommit.historic

Counts churn. Churn is the number of lines added or deleted. We measure this per commit because the commit is a basic unit of work for a programmer. This metric computes a table per commit for today and is not used for comparison between projects; it is used further downstream to analyze activity.

Back to top


rascal.generic.churn.churnPerCommitter.historic

Counts churn per committer: the number of lines of code added and deleted. It zooms in on a single committer, producing a table which can be used for downstream processing.

Back to top


rascal.generic.churn.commitsInTwoWeeks.historic

Commits in the last two weeks: aggregates the number of commits over a 14-day sliding window.

Back to top


rascal.generic.churn.churnInTwoWeeks.historic

Churn in the last two weeks: aggregates the lines of code added and deleted over a 14-day sliding window.

Back to top


org.eclipse.scava.metricprovider.historic.commits.messages.topics

This metric computes the labels of topic clusters in commit messages pushed by users in the last 30 days.

Additional Information :

Visualisation Output Information :

Back to top


Historic Metric Providers for Documentation

The following Historic Metric Providers are associated with documentation analyses.

Back to top


org.eclipse.scava.metricprovider.historic.documentation.readability

Historic metric that stores the evolution of the documentation readability. The higher the readability score, the harder the text is to understand.

Additional Information :

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.documentation.sentiment

Historic metric that stores the evolution of the documentation sentiment polarity. The sentiment score can be closer to -1 (negative sentiment), 0 (neutral sentiment) or +1 (positive sentiment).

Additional Information :

Visualisation Output Information :

Back to top


Historic Metric Providers for Generic Source Code

These metrics are related to the source code of analyzed projects, regardless of the language(s) they are written in.

Back to top


trans.rascal.clones.cloneLOCPerLanguage.historic

Lines of code in Type I clones larger than 6 lines, per language. A Type I clone is a literal clone. A large number of literal clones is considered to be bad. This metric is not easily compared between systems because it is not size normalized yet. We use it for further processing downstream. You can analyze the trend over time using this metric.

Back to top


trans.rascal.readability.fileReadabilityQuartiles.historic

We measure file readability by counting exceptions to common usage of whitespace in source code, such as spaces after commas. The quartiles represent how many of the files have how many of these deviations. A few deviations per file is ok, but many files with many deviations indicates a lack of attention to readability.

Back to top


trans.rascal.comments.commentLinesPerLanguage.historic

Number of lines containing comments per language (excluding headers). The balance between comments and code indicates understandability. Too many comments are often not maintained and may lead to confusion, not enough means the code lacks documentation explaining its intent. This is a basic fact collection metric which is used further downstream.

Back to top


trans.rascal.comments.commentedOutCodePerLanguage.historic

Lines of commented-out code per file. This metric uses heuristics (the frequency of certain substrings typically used in code and not in natural language) to estimate how much of the source code comments is actually commented-out code. Commented-out code, in large quantities, is a quality contra-indicator.

Back to top


trans.rascal.comments.headerPercentage.historic

Percentage of files with headers indicates the proportion of files that have been tagged with a copyright statement (or not). If the number is low, this indicates a problem with the copyright of the program. Source files without a copyright statement are not open-source; they are owned, in principle, by the author and may not be copied without permission. Note that the existence of a header does not guarantee the presence of an open-source license, but its absence certainly is telling.

Back to top


trans.rascal.LOC.genericLOCoverFiles.historic

We find out how evenly the code is spread over files. The number should be quite stable over time. A jump in this metric indicates a large change in the code base. If the code is focused in only a few very large files then this may be a contra-indicator for quality.

Back to top


trans.rascal.LOC.locPerLanguage.historic

Physical lines of code simply counts the number of newline characters (OS-independent) in a source code file. We accumulate this number per programming language. The metric can be used to compare the volume of two systems and to assess in which programming language the bulk of the code is written.
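
The OS-independent counting rule can be sketched as below; this illustrates the counting convention, not the provider's actual source.

```python
def physical_loc(source_text):
    """Count newline characters, treating CRLF, CR and LF alike."""
    normalized = source_text.replace("\r\n", "\n").replace("\r", "\n")
    return normalized.count("\n")
```

Note that a final line without a trailing newline adds nothing to the count, which is consistent with counting newline characters rather than lines.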

Back to top


Historic Metric Providers for Java code

These metrics are related to the Java source code of analyzed projects.

Back to top


style.filesWithErrorProneness.historic

Percentage of files with error proneness

Back to top


style.filesWithUnderstandabilityIssues.historic

Percentage of files with understandability issues. This is a basic metric which cannot be easily compared between projects.

Back to top


style.spreadOfStyleViolations.historic

Indicates, between 0 and 1, how evenly spread the style violations are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


style.filesWithInefficiencies.historic

Percentage of files with inefficiencies

Back to top


style.filesWithStyleViolations.historic

Percentage of files with style violations

Back to top


style.spreadOfUnderstandabilityIssues.historic

Indicates, between 0 and 1, how evenly spread the understandability issues are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


style.spreadOfInefficiencies.historic

Indicates, between 0 and 1, how evenly spread the style violations which indicate inefficiencies are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


style.spreadOfErrorProneness.historic

Indicates, between 0 and 1, how evenly spread the style violations which indicate error proneness are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


rascal.testability.java.TestOverPublicMethods.historic

Number of JUnit tests relative to the total number of public methods. Ideally, all public methods are tested. With this number we compute how far the project is from that ideal.

Back to top


rascal.testability.java.NumberOfTestMethods.historic

Number of JUnit test methods

Back to top


rascal.testability.java.TestCoverage.historic

This is a static over-estimation of test coverage: which code is executed in the system when all JUnit test cases are executed? We approximate this by using the static call graphs and assuming every method which can be called, will be called. This leads to an over-approximation compared to a dynamic code coverage analysis, but the static analysis does follow the trend, and a low code coverage here is a good indicator of a lack of testing effort in the project.
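
The over-approximation can be sketched as a reachability computation on the static call graph. This is a simplified model (the real provider works on Java source models), and counting the test methods themselves as covered is an assumption of this sketch.

```python
from collections import deque

def static_coverage(call_graph, test_methods, all_methods):
    """Fraction of methods reachable from the test methods, assuming every
    statically possible call is actually taken (hence an over-approximation)."""
    seen = set(m for m in test_methods if m in all_methods)
    queue = deque(seen)
    while queue:
        method = queue.popleft()
        for callee in call_graph.get(method, ()):
            if callee in all_methods and callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return len(seen) / len(all_methods) if all_methods else 0.0
```

For example, with a call graph where testFoo calls foo and foo calls bar, three of four methods {testFoo, foo, bar, baz} are reachable, giving 0.75.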

Back to top


trans.rascal.OO.java.Ca-Java-Quartiles.historic

Afferent coupling quartiles (Java)

Back to top


trans.rascal.OO.java.CF-Java.historic

Coupling factor (Java)

Back to top


trans.rascal.OO.java.DAC-Java-Quartiles.historic

Data abstraction coupling quartiles (Java)

Back to top


trans.rascal.OO.java.MPC-Java-Quartiles.historic

Message passing coupling quartiles (Java)

Back to top


trans.rascal.OO.java.PF-Java.historic

Polymorphism factor (Java)

Back to top


trans.rascal.OO.java.RFC-Java-Quartiles.historic

Response for class quartiles (Java)

Back to top


trans.rascal.OO.java.I-Java-Quartiles.historic

Instability quartiles (Java)

Back to top


trans.rascal.OO.java.MIF-Java-Quartiles.historic

Method inheritance factor quartiles (Java)

Back to top


trans.rascal.OO.java.MHF-Java.historic

Method hiding factor (Java)

Back to top


trans.rascal.OO.java.AHF-Java.historic

Attribute hiding factor (Java)

Back to top


trans.rascal.OO.java.LCOM-Java-Quartiles.historic

Lack of cohesion in methods quartiles (Java)

Back to top


trans.rascal.OO.java.A-Java.historic

Abstractness (Java)

Back to top


trans.rascal.OO.java.DIT-Java-Quartiles.historic

Depth of inheritance tree quartiles (Java)

Back to top


trans.rascal.OO.java.TCC-Java-Quartiles.historic

Tight class cohesion quartiles (Java)

Back to top


trans.rascal.OO.java.LCOM4-Java-Quartiles.historic

Lack of cohesion in methods 4 quartiles (Java)

Back to top


trans.rascal.OO.java.SR-Java.historic

Specialization ratio (Java)

Back to top


trans.rascal.OO.java.AIF-Java-Quartiles.historic

Attribute inheritance factor quartiles (Java)

Back to top


trans.rascal.OO.java.NOC-Java-Quartiles.historic

Number of children quartiles (Java)

Back to top


trans.rascal.OO.java.RR-Java.historic

Reuse ratio (Java)

Back to top


trans.rascal.OO.java.LCC-Java-Quartiles.historic

Loose class cohesion quartiles (Java)

Back to top


trans.rascal.OO.java.Ce-Java-Quartiles.historic

Efferent coupling quartiles (Java)

Back to top


trans.rascal.OO.java.NOM-Java-Quartiles.historic

Number of methods quartiles (Java)

Back to top


trans.rascal.OO.java.NOA-Java-Quartiles.historic

Number of attributes quartiles (Java)

Back to top


trans.rascal.OO.java.CBO-Java-Quartiles.historic

Coupling between objects quartiles (Java)

Back to top


trans.rascal.advancedfeatures.java.AdvancedLanguageFeaturesJavaQuartiles.historic

Quartiles of counts of advanced Java features (wildcards, union types and anonymous classes). The numbers indicate the thresholds that delimit the first 25%, 50% and 75% of the data, as well as the maximum and minimum values.
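
The five reported numbers (minimum, the three quartile thresholds, maximum) can be computed as sketched below. The nearest-rank method used here is an assumption; the platform may interpolate quartiles differently.

```python
import math

def five_number_summary(values):
    """Min, Q1, median, Q3 and max of a non-empty dataset (nearest-rank quartiles)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(p):
        # smallest value with at least a fraction p of the data at or below it
        return xs[max(0, math.ceil(p * n) - 1)]

    return xs[0], quantile(0.25), quantile(0.5), quantile(0.75), xs[-1]
```

For example, five_number_summary([1, 2, 3, 4]) returns (1, 1, 2, 3, 4).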

Back to top


trans.rascal.CC.java.CCHistogramJava.historic

Number of Java methods per CC risk factor: counts the number of methods that fall in a low, medium or high risk band. The histogram can be compared between projects to indicate which is probably easier to maintain on a method-by-method basis.
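
Bucketing methods into risk bands can be sketched as follows. The thresholds of 11 and 21 are illustrative assumptions drawn from commonly cited cyclomatic-complexity risk bands, not necessarily Scava's exact cut-offs.

```python
def cc_histogram(cc_values, medium_from=11, high_from=21):
    """Count methods per cyclomatic-complexity risk band.
    The band boundaries are illustrative assumptions."""
    hist = {"low": 0, "medium": 0, "high": 0}
    for cc in cc_values:
        if cc >= high_from:
            hist["high"] += 1
        elif cc >= medium_from:
            hist["medium"] += 1
        else:
            hist["low"] += 1
    return hist
```

For example, methods with complexities 1, 12 and 30 land one each in the low, medium and high bands.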

Back to top


trans.rascal.CC.java.CCOverJavaMethods.historic

Calculates how cyclomatic complexity is spread over the methods of a system. If high CC is localized, then this may be easily fixed but if many methods have high complexity, then the project may be at risk. This metric is good to compare between projects.

Historic Metric Providers for OSGi Dependencies

These metrics are related to OSGi dependencies declared in MANIFEST.MF files.

Back to top


trans.rascal.dependency.osgi.numberOSGiBundleDependencies.historic

Retrieves the number of OSGi bundle dependencies (i.e. Require-Bundle dependencies).

Historic Metric Providers for Maven Dependencies

These metrics are related to Maven dependencies declared in pom.xml files.

Back to top


trans.rascal.dependency.maven.numberMavenDependencies.historic

Retrieves the number of Maven dependencies.

Back to top


Historic Metric Providers for Docker Dependencies

The following Historic Metric Provider is associated with Docker Dependencies.

Back to top


org.eclipse.scava.metricprovider.historic.configuration.docker.dependencies

This metric computes the number of dependencies defined in the Dockerfiles of a project per day. It also computes additional information, such as the number of occurrences of each dependency version (image/package).

Visualisation Output Information :

Back to top


Historic Metric Providers for Puppet Dependencies

The following Historic Metric Provider is associated with Puppet Dependencies.

Back to top


org.eclipse.scava.metricprovider.historic.configuration.puppet.dependencies

This metric computes the number of dependencies defined in the Puppet manifests of a project per day.

Visualisation Output Information :

Back to top


Historic Metric Providers for Docker Smells

The following Historic Metric Provider is associated with Docker Smells.

Back to top


org.eclipse.scava.metricprovider.historic.configuration.docker.smells

This metric computes the number of smells detected in the Dockerfiles of a project per day. It also computes additional information, such as the number of smells of each type.

Visualisation Output Information :

Back to top


Historic Metric Providers for Puppet Smells

The following Historic Metric Providers are associated with Puppet Smells.

Back to top


org.eclipse.scava.metricprovider.historic.configuration.puppet.designsmells

This metric computes the number of design smells detected in the Puppet manifests of a project per day. It also computes additional information, such as the number of smells of each type.

Visualisation Output Information :

Back to top


org.eclipse.scava.metricprovider.historic.configuration.puppet.implementationsmells

This metric computes the number of implementation smells detected in the Puppet manifests of a project per day. It also computes additional information, such as the number of smells of each type.

Visualisation Output Information :

Back to top


Transient Metric Providers

Transient metrics are used to calculate heuristics associated with a particular period in time, i.e. a single day. Transient metrics are stored temporarily within the knowledge base, and their output is passed as parameters into the calculation of other transient and historic metrics. Depending on its complexity, a transient metric can depend on the output of other tools, on other transient metrics, or have no dependencies at all.

Back to top


Transient Metric Providers for Bug Trackers

The following Transient Metric Providers are associated with issue trackers.

Back to top


org.eclipse.scava.metricprovider.trans.bugs.activeusers

This metric computes the number of users that submitted new bug comments in the last 15 days, for each bug tracker.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.bugmetadata

This metric computes various metadata from the bug header, i.e. priority, status, operating system and resolution. Other values computed by this metric include the average sentiment, content class and requests/replies.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.comments

This metric computes the number of bug comments, per bug tracker.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.contentclasses

This metric computes the frequency and percentage of content classes in bug comments, per bug tracker.

Additional Information :

Note : classLabel could be one of the content classes shown in the hierarchical tree structure below. Where a node has child sub-trees, only the child nodes are considered as classLabel. For example, bug comments of type 1. Clarification can be labelled as either 1.1 or 1.2. A node without sub-trees, such as 2. Suggestion of solution, is considered a classLabel on its own.

Content Class Labels

Back to top


org.eclipse.scava.metricprovider.trans.bugs.dailyrequestsreplies

This metric computes the number of bug comments, including those regarded as requests and replies each day, per bug tracker.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.emotions

This metric computes the emotional dimensions in bug comments, per bug tracker. There are 6 emotion labels (anger, fear, joy, sadness, love, surprise).

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.hourlyrequestsreplies

This metric computes the number of bug comments, including those regarded as requests and replies, every hour of the day, per bug tracker.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.migrationissues

This metric detects migration issues in Bug Tracking Systems.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.migrationissuesmaracas

This metric detects migration issues in Bug Tracking Systems along with data from Maracas.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.newbugs

This metric computes the number of new bugs over time, per bug tracker.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.patches

This metric computes the number of patches submitted by the community (users) for each bug.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.bugs.references

This metric searches for references to commits or bugs within bug comments.

Additional Information :

Note : When this metric is used on GitHub, it should be noted that some bug references will in fact be pull requests, because GitHub treats pull requests as issues.

Back to top


org.eclipse.scava.metricprovider.trans.bugs.requestsreplies

This metric computes, for each bug, whether it was answered. If so, it computes the time taken to respond.

Additional Information :

Back to top


Transient Metric Providers for Newsgroups and Forums

The following Transient Metric Providers are associated with communication channels in general, either newsgroups or forums. Although the metric names refer to newsgroups, all the metrics are valid for any communication channel.

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.activeusers

This metric computes the number of users that submitted news comments in the last 15 days, per newsgroup.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.articles

This metric computes the number of articles, per newsgroup.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.contentclasses

This metric computes the content classes in newsgroup articles, per newsgroup.

Additional Information :

Note : classLabel could be one of the content classes shown in the hierarchical tree structure below. Where a node consists of sub-trees of children, only the child nodes are considered as classLabel. For example, articles of type 1. Clarification can be labelled as either 1.1 or 1.2. A node without sub-trees such as 2. Suggestion of solution is considered classLabel on its own.

Content Class Labels

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.dailyrequestsreplies

This metric computes the number of articles, including those regarded as requests and replies for each day of the week, per newsgroup.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.emotions

This metric computes the emotional dimensions in newsgroup articles, per newsgroup. There are 6 emotion labels (anger, fear, joy, sadness, love, surprise).

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.hourlyrequestsreplies

This metric computes the number of articles, including those regarded as requests and replies for each hour of the day, per newsgroup.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.migrationissues

This metric detects migration issues in communication channel articles.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.migrationissuesmaracas

This metric detects migration issues in Newsgroups along with data from Maracas.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.sentiment

The metric computes the average sentiment, including the sentiment at the beginning and end of each thread, per newsgroup. The sentiment polarity value ranges from -1 (negative sentiment) through 0 (neutral sentiment) to +1 (positive sentiment).
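The per-thread computation can be sketched as follows (a minimal sketch; the polarity values are hypothetical):

```python
def thread_sentiment(polarities):
    """Average sentiment of one thread plus the polarity of its first
    and last messages; each polarity lies in [-1.0, 1.0]."""
    return {
        "average": sum(polarities) / len(polarities),
        "begin": polarities[0],
        "end": polarities[-1],
    }

# Polarities of the messages in one thread (hypothetical values):
result = thread_sentiment([-0.6, 0.0, 0.2, 0.8])
```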

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.threads

This metric holds information for assigning newsgroup articles to threads. The threading algorithm is executed from scratch every time.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.newsgroups.threadsrequestsreplies

The metric computes for each thread whether it is answered. If so, it computes the response time.

Additional Information :

Back to top


Transient Metric Providers for Documentation

The following Transient Metric Providers are associated with documentation analyses.

Back to top


org.eclipse.scava.metricprovider.trans.documentation

This metric processes the files returned by the documentation readers and extracts the body (in HTML or text format).

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.documentation.classification

This metric determines which type of documentation is present. The possible types are: API, Development, Installation, Started, User.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.documentation.detectingcode

This metric processes the plain text from documentation and detects the portions corresponding to code and to natural language.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.documentation.plaintext

This metric processes the body of each documentation entry and extracts the plain text.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.documentation.readability

This metric calculates the readability of each documentation entry. The higher the score, the more difficult to understand the text.
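The guide does not name the readability formula used; the Automated Readability Index is one score with the same "higher means harder" orientation and serves here purely as an illustration:

```python
import re

def automated_readability_index(text):
    """ARI: one readability score where higher values mean harder text.
    (Shown for illustration; the formula Scava uses is not named here.)"""
    words = re.findall(r"[A-Za-z]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43

simple = "The cat sat. The dog ran."
dense = "Polymorphic instantiation necessitates comprehensive documentation."
```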

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.documentation.sentiment

This metric calculates the sentiment polarity of each documentation entry. The sentiment polarity value ranges from -1 (negative sentiment) through 0 (neutral sentiment) to +1 (positive sentiment).

Additional Information :

Back to top


Transient Metric Providers for Natural Language Processing

The following Transient Metric Providers are associated with Natural Language Processing tools.

Back to top


org.eclipse.scava.metricprovider.trans.detectingcode

This metric determines the parts of a bug comment or a newsgroup article that contain code or natural language.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.emotionclassification

This metric computes the emotions present in each bug comment, newsgroup article or forum post. There are 6 emotion labels (anger, fear, joy, sadness, love, surprise).

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.plaintextprocessing

This metric preprocesses each bug comment, newsgroup article or forum post into a split plain text format.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.requestreplyclassification

This metric computes whether a bug comment, newsgroup article or forum post is a request or a reply.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.sentimentclassification

This metric computes the sentiment of each bug comment, newsgroup article or forum post. Sentiment can be -1 (negative sentiment), 0 (neutral sentiment) or 1 (positive sentiment).

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.severityclassification

This metric computes the severity of each bug comment, newsgroup article or forum post. Severity can be one of blocker, critical, major, minor, enhancement or normal. For bug comments, there is an additional severity level called unknown. A bug's severity is considered unknown if there is not enough information for the classifier to make a decision, for example, an unanswered bug with no user comment to analyse.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.topics

This metric computes topic clusters for each bug comment, newsgroup article or forum post in the last 30 days.

Additional Information :


Transient Metric Providers for Commits and Committers

These metrics are related to the commits and committers of a project.

Back to top


org.eclipse.scava.metricprovider.trans.commits.message.plaintext

This metric preprocesses each commit message to get a split plain text version.

Additional Information :

Back to top


org.eclipse.scava.metricprovider.trans.commits.messagereferences

This metric searches for references to commits or bugs within commit messages. In order to detect bug references, one bug tracker must be used at the same time, as the retrieval of references is based on patterns defined by bug trackers. If multiple or zero bug trackers are defined in the project, the metric will only search for commits (alphanumeric strings of 40 characters).

Additional Information :

Note : When this metric is used on GitHub, it should be noted that some bug references will in fact be pull requests, because GitHub treats pull requests as issues.
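The detection can be sketched with regular expressions. The exact patterns Scava uses are not shown here, so both expressions below (a 40-character alphanumeric commit id, and a GitHub-style `#123` bug reference) are illustrative assumptions:

```python
import re

# Hypothetical patterns; the real metric derives bug patterns from the
# configured bug tracker.
COMMIT_RE = re.compile(r"\b[0-9a-zA-Z]{40}\b")  # 40-char alphanumeric commit id
BUG_RE = re.compile(r"#(\d+)")                  # e.g. GitHub-style "#123"

def find_references(message):
    """Return (commit_ids, bug_ids) referenced in a commit message."""
    return COMMIT_RE.findall(message), BUG_RE.findall(message)

msg = "Fixes #42, follow-up to 3f786850e387550fdab836ed7e6dc881de23001b"
commits, bugs = find_references(msg)
```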

Back to top


org.eclipse.scava.metricprovider.trans.commits.message.topics

This metric computes topic clusters for each commit message.

Additional Information :

Back to top


trans.rascal.activecommitters.activeCommitters

A list of committers who have been active in the last two weeks. This metric is meant for downstream processing.

Back to top


trans.rascal.activecommitters.committersoverfile

Calculates the Gini coefficient of committers per file.
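A Gini coefficient over per-committer contribution counts can be computed as in the sketch below (0 means perfectly even, values near 1 mean one committer dominates; the sample counts are hypothetical):

```python
def gini(values):
    """Gini coefficient of a list of non-negative counts:
    0 = perfectly even, values close to 1 = highly concentrated."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the ordered values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

# Commits per committer touching one file (hypothetical numbers):
even = gini([5, 5, 5, 5])      # every committer contributes equally
skewed = gini([97, 1, 1, 1])   # one committer dominates
```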

Back to top


trans.rascal.activecommitters.countCommittersPerFile

Count the number of committers that have touched a file.

Back to top


trans.rascal.activecommitters.firstLastCommitDatesPerDeveloper

Collects, per developer, the first and last dates on which they contributed code. This basic metric is used downstream by other metrics, but it is also used to drill down into the membership of specific individuals in the development team.

Back to top


trans.rascal.activecommitters.developmentTeam

Lists the names of people who have contributed code at least once in the history of the project.

Back to top


trans.rascal.activecommitters.percentageOfWeekendCommits

Percentage of commits made during the weekend

Back to top


trans.rascal.activecommitters.maximumActiveCommittersEver

What is the maximum number of committers who have been active together in any two week period?

Back to top


trans.rascal.activecommitters.developmentTeamEmails

Lists the email addresses of people who have contributed code at least once in the history of the project.

Back to top


trans.rascal.activecommitters.developmentDomainNames

Lists the domain names of email addresses of developers if such information is present.

Back to top


trans.rascal.activecommitters.committersPerFile

Register which committers have contributed to which files

Back to top


trans.rascal.activecommitters.longerTermActiveCommitters

Committers who have been active in the last 12 months. This metric is meant for downstream processing.

Back to top


trans.rascal.activecommitters.commitsPerDeveloper

The number of commits per developer indicates not only the volume of the contribution of an individual but also the style in which he or she commits, when combined with other metrics such as churn. Few and big commits are different from many small commits. This metric is used downstream by other metrics as well.

Back to top


trans.rascal.activecommitters.committersAge

Measures in days the amount of time between the first and last contribution of each developer.

Back to top


trans.rascal.activecommitters.committersToday

Which committers have been active today?

Back to top


trans.rascal.activecommitters.projectAge

Age of the project (number of days between the first and last commit).

Back to top


trans.rascal.activecommitters.commitsPerWeekDay

On which day of the week do commits take place?

Back to top


trans.rascal.activecommitters.committersEmailsToday

The email addresses of committers who have been active today.

Back to top


trans.rascal.activecommitters.sizeOfDevelopmentTeam

How many people have ever contributed code to this project?

Back to top


trans.rascal.activecommitters.numberOfActiveCommittersLongTerm

Number of long-term active committers over time (active in the last year). This measures a sliding window of one year, where every day we report the number of developers active in the previous 365 days.

Back to top


trans.rascal.activecommitters.numberOfActiveCommitters

Number of active committers over time (active in the last two weeks). This measures a sliding window of two weeks, where every day we report the number of developers active in the previous 14 days.
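The sliding-window count can be sketched as follows, assuming commits are available as (author, date) pairs (the sample data is hypothetical):

```python
from datetime import date, timedelta

def active_committers_per_day(commits, start, end, window_days=14):
    """For each day in [start, end], count distinct committers with at
    least one commit in the preceding `window_days` days (inclusive).
    `commits` is an iterable of (author, date) pairs."""
    counts = {}
    day = start
    while day <= end:
        window_start = day - timedelta(days=window_days - 1)
        authors = {a for a, d in commits if window_start <= d <= day}
        counts[day] = len(authors)
        day += timedelta(days=1)
    return counts

commits = [("alice", date(2020, 1, 1)),
           ("bob",   date(2020, 1, 10)),
           ("alice", date(2020, 1, 20))]
counts = active_committers_per_day(commits, date(2020, 1, 10), date(2020, 1, 20))
```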

Back to top


rascal.generic.churn.commitsToday

Counts the number of commits made today.

Back to top


rascal.generic.churn.churnToday

Counts the churn for today: the total number of lines of code added and deleted. This metric is used further downstream to analyze trends.

Back to top


rascal.generic.churn.churnPerCommitInTwoWeeks

The ratio between the churn and the number of commits indicates how large each commit is on average. We compute this as a sliding average over two weeks, which smoothens exceptions and makes it possible to see a historical trend. Commits should not be too big all the time, because that would indicate either that programmers are not focusing on well-defined tasks or that the system architecture does not allow for separation of concerns.

Back to top


rascal.generic.churn.churnActivity

Churn in the last two weeks: collects the lines of code added and deleted over a 14-day sliding window.

Back to top


rascal.generic.churn.commitActivity

Number of commits in the last two weeks: collects commit activity over a 14-day sliding window.

Back to top


rascal.generic.churn.coreCommittersChurn

Finds out, for the core committers, their total number of added and deleted lines for this system.

Back to top


rascal.generic.churn.filesPerCommit

Counts the number of files per commit to find out about the separation of concerns in the architecture or in the tasks the programmers perform. This metric is used further downstream.

Back to top


rascal.generic.churn.churnPerCommit

Count churn. Churn is the number of lines added or deleted. We measure this per commit because the commit is a basic unit of work for a programmer. This metric computes a table per commit for today and is not used for comparison between projects. It is used further downstream to analyze activity.

Back to top


rascal.generic.churn.churnPerCommitter

Count churn per committer: the number of lines of code added and deleted. It zooms in on the single committer producing a table which can be used for downstream processing.

Back to top


rascal.generic.churn.churnPerFile

Churn per file counts the number of lines added and deleted in a single file. This is a basic metric to indicate hotspots in the design of the system: files which are changed often. This metric is used further downstream.

Back to top


rascal.generic.churn.commitsInTwoWeeks

Commits in the last two weeks: aggregates the number of commits over a 14-day sliding window.

Back to top


rascal.generic.churn.churnInTwoWeeks

Churn in the last two weeks: aggregates the lines of code added and deleted over a 14-day sliding window.

Back to top


Transient Metric Providers for Generic Source Code

These metrics are related to the source code of analyzed projects, regardless of the language(s) they are written in.

Back to top


trans.rascal.readability.fileReadability

Code readability per file, measured by use of whitespace: counts deviations from common usage of whitespace in source code, such as spaces after commas. This is a basic collection metric which is used further downstream.

Back to top


trans.rascal.readability.fileReadabilityQuartiles

We measure file readability by counting exceptions to common usage of whitespace in source code, such as spaces after commas. The quartiles represent how many of the files have how many of these deviations. A few deviations per file are acceptable, but many files with many deviations indicate a lack of attention to readability.
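A minimal sketch of such a deviation count, assuming just two illustrative whitespace rules (the real metric's rule set is not listed here):

```python
import re

# Hypothetical deviation checks; the actual rule set is larger.
RULES = [
    re.compile(r",\S"),        # no space after a comma
    re.compile(r"[^\s(]\{"),   # no space before an opening brace
]

def whitespace_deviations(source):
    """Count deviations from common whitespace conventions in a file."""
    return sum(len(rule.findall(source)) for rule in RULES)

clean = "foo(a, b) {\n    bar(c, d);\n}\n"
messy = "foo(a,b){\n bar(c,d);\n}\n"
```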

Back to top


trans.rascal.comments.headerCounts

In principle it is expected for the files in a project to share the same license. The license text in the header of each file may differ slightly due to different copyright years and/or lists of contributors. The heuristic allows for slight differences. The metric produces the number of different types of headers found. A high number is a contra-indicator, meaning either a confusing licensing scheme or that the source code of many different projects is included in the code base of the analyzed system.

Back to top


trans.rascal.comments.commentedOutCode

Lines of commented out code per file uses heuristics (frequency of certain substrings typically used in code and not in natural language) to find out how much of the source code comments is actually commented out code. Commented out code, in large quantities, is a quality contra-indicator.

Back to top


trans.rascal.comments.commentLOC

Number of lines containing comments per file is a basic metric used for downstream processing. This metric does not consider the difference between natural language comments and commented out code.

Back to top


trans.rascal.comments.commentLinesPerLanguage

Number of lines containing comments per language (excluding headers). The balance between comments and code indicates understandability. Too many comments are often not maintained and may lead to confusion; too few means the code lacks documentation explaining its intent. This is a basic fact collection metric which is used further downstream.

Back to top


trans.rascal.comments.commentedOutCodePerLanguage

Lines of commented out code per language uses heuristics (frequency of certain substrings typically used in code and not in natural language) to find out how much of the source code comments is actually commented out code. Commented out code, in large quantities, is a quality contra-indicator.

Back to top


trans.rascal.comments.headerLOC

Header size per file is a basic metric counting the size of the comment at the start of each file. It is used for further processing downstream.

Back to top


trans.rascal.comments.matchingLicenses

We match against a list of known licenses to find out which are used in the current project

Back to top


trans.rascal.comments.headerPercentage

Percentage of files with headers is an indicator for the amount of files which have been tagged with a copyright statement (or not). If the number is low this indicates a problem with the copyright of the program. Source files without a copyright statement are not open-source, they are owned, in principle, by the author and may not be copied without permission. Note that the existence of a header does not guarantee the presence of an open-source license, but its absence certainly is telling.

Back to top


trans.rascal.LOC.genericLOC

Physical lines of code simply counts the number of newline characters (OS independent) in a source code file. The metric can be used to compare the volume between two systems.

Back to top


trans.rascal.LOC.genericLOCoverFiles

We find out how evenly the code is spread over files. The number should be quite stable over time. A jump in this metric indicates a large change in the code base. If the code is focused in only a few very large files then this may be a contra-indicator for quality.

Back to top


trans.rascal.LOC.locPerLanguage

Physical lines of code simply counts the number of newline characters (OS independent) in a source code file. We accumulate this number per programming language. The metric can be used to compare the volume between two systems and to assess in which programming language the bulk of the code is written.

Back to top


trans.rascal.clones.cloneLOCPerLanguage

Lines of code in Type I clones larger than 6 lines, per language. A Type I clone is a literal clone. A large number of literal clones is considered to be bad. This metric is not easily compared between systems because it is not size normalized yet. We use it for further processing downstream. You can analyze the trend over time using this metric.
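Literal (Type I) clone detection over windows of more than 6 lines can be sketched as below. This is a simplified hash-based approach, not Scava's actual implementation:

```python
from collections import defaultdict

def literal_clone_lines(files, min_lines=7):
    """Rough sketch of Type I (literal) clone detection: index every
    window of `min_lines` consecutive (whitespace-normalized) lines and
    collect the lines belonging to a window that occurs in more than
    one location. `files` maps file name -> source text."""
    index = defaultdict(list)                 # window text -> [(file, start)]
    for name, text in files.items():
        lines = [l.strip() for l in text.splitlines()]
        for i in range(len(lines) - min_lines + 1):
            window = "\n".join(lines[i:i + min_lines])
            index[window].append((name, i))
    cloned = set()                            # (file, line index) pairs
    for window, places in index.items():
        if len(places) > 1:
            for name, start in places:
                cloned.update((name, start + k) for k in range(min_lines))
    return cloned

# Two files sharing the same 7 literal lines (hypothetical content):
body = "\n".join("stmt%d;" % i for i in range(7))
files = {"A.java": body + "\nextraA;", "B.java": "extraB;\n" + body}
cloned = literal_clone_lines(files)
```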


Transient Metric Providers for Java Code

These metrics are related to the Java source code of analyzed projects.

Back to top


style.filesWithErrorProneness

Percentage of files with error proneness

Back to top


style.understandability

Percentage of the project's files with coding style violations which indicate the code may be hard to read and understand, but not necessarily more error prone.

Back to top


style.inefficiencies

Percentage of the project's files with coding style violations which indicate common inefficient ways of doing things in Java.

Back to top


style.filesWithUnderstandabilityIssues

Percentage of files with understandability issues. This is a basic metric which cannot easily be compared between projects.

Back to top


style.errorProneness

Percentage of the project's files with coding style violations which indicate error prone code. This is a basic metric which collects, per file, all the style violations, recording the line number and the kind of style violation. Each kind of violation is grouped into a category. The resulting table is hard to interpret manually and cannot be compared between projects. Other metrics further downstream aggregate this information.

Back to top


style.spreadOfStyleViolations

Indicates, between 0 and 1, how evenly spread the style violations are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


style.filesWithInefficiencies

Percentage of files with inefficiencies

Back to top


style.filesWithStyleViolations

Percentage of files with style violations

Back to top


style.spreadOfUnderstandabilityIssues

Indicates, between 0 and 1, how evenly spread the understandability issues are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


style.spreadOfInefficiencies

Indicates, between 0 and 1, how evenly spread the style violations which indicate inefficiencies are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


style.styleViolations

This is a basic metric which collects, per file, all the style violations, recording the line number and the kind of style violation. Each kind of violation is grouped into a category. The resulting table is hard to interpret manually and cannot be compared between projects. Other metrics further downstream aggregate this information.

Back to top


style.spreadOfErrorProneness

Indicates, between 0 and 1, how evenly spread the style violations which indicate error proneness are. This metric makes sense if there are more than 5 files in a project and can be compared between projects as well. If problems are widespread this may be a quality contra-indicator, while a localized problem could be easily fixed.

Back to top


rascal.testability.java.TestOverPublicMethods

Number of JUnit tests averaged over the total number of public methods. Ideally all public methods are tested. With this number we compute how far from the ideal situation the project is.

Back to top


rascal.testability.java.NumberOfTestMethods

Number of JUnit test methods

Back to top


rascal.testability.java.TestCoverage

This is a static over-estimation of test coverage: which code is executed in the system when all JUnit test cases are executed? We approximate this by using the static call graphs and assuming every method which can be called, will be called. This leads to an over-approximation compared to a dynamic code coverage analysis, but the static analysis does follow the trend, and a low code coverage here is a good indicator of a lack of testing effort in the project.
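The over-approximation amounts to reachability in the call graph from the test methods, as in the sketch below (the graph and method names are hypothetical):

```python
from collections import deque

def static_coverage(call_graph, test_methods, all_methods):
    """Over-approximate test coverage: every method reachable from a
    JUnit test in the static call graph counts as covered.
    `call_graph` maps a method to the methods it may call."""
    seen = set(test_methods)
    queue = deque(test_methods)
    while queue:
        m = queue.popleft()
        for callee in call_graph.get(m, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    covered = seen & set(all_methods)
    return len(covered) / len(all_methods)

# Hypothetical call graph: testFoo -> foo -> util; bar is never reached.
graph = {"testFoo": ["foo"], "foo": ["util"], "bar": ["util"]}
coverage = static_coverage(graph, ["testFoo"], ["foo", "bar", "util"])
```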

Back to top


trans.rascal.OO.java.MIF-Java

Method inheritance factor (Java)

Back to top


trans.rascal.OO.java.Ca-Java-Quartiles

Afferent coupling quartiles (Java)

Back to top


trans.rascal.OO.java.DAC-Java

Data abstraction coupling (Java)

Back to top


trans.rascal.OO.java.CF-Java

Coupling factor (Java)

Back to top


trans.rascal.OO.java.I-Java

Instability (Java)

Back to top


trans.rascal.OO.java.DAC-Java-Quartiles

Data abstraction coupling quartiles (Java)

Back to top


trans.rascal.OO.java.MPC-Java-Quartiles

Message passing coupling quartiles (Java)

Back to top


trans.rascal.OO.java.NOM-Java

Number of methods (Java)

Back to top


trans.rascal.OO.java.LCOM-Java

Lack of cohesion in methods (Java)

Back to top


trans.rascal.OO.java.CBO-Java

Coupling between objects (Java)

Back to top


trans.rascal.OO.java.Ce-Java

Efferent coupling (Java)

Back to top


trans.rascal.OO.java.PF-Java

Polymorphism factor (Java)

Back to top


trans.rascal.OO.java.RFC-Java-Quartiles

Response for class quartiles (Java)

Back to top


trans.rascal.OO.java.I-Java-Quartiles

Instability quartiles (Java)

Back to top


trans.rascal.OO.java.RFC-Java

Response for class (Java)

Back to top


trans.rascal.OO.java.LCC-Java

Loose class cohesion (Java)

Back to top


trans.rascal.OO.java.MIF-Java-Quartiles

Method inheritance factor quartiles (Java)

Back to top


trans.rascal.OO.java.DIT-Java

Depth of inheritance tree (Java)

Back to top


trans.rascal.OO.java.MHF-Java

Method hiding factor (Java)

Back to top


trans.rascal.OO.java.TCC-Java

Tight class cohesion (Java)

Back to top


trans.rascal.OO.java.AHF-Java

Attribute hiding factor (Java)

Back to top


trans.rascal.OO.java.LCOM-Java-Quartiles

Lack of cohesion in methods quartiles (Java)

Back to top


trans.rascal.OO.java.Ca-Java

Afferent coupling (Java)

Back to top


trans.rascal.OO.java.A-Java

Abstractness (Java)

Back to top


trans.rascal.OO.java.DIT-Java-Quartiles

Depth of inheritance tree quartiles (Java)

Back to top


trans.rascal.OO.java.TCC-Java-Quartiles

Tight class cohesion quartiles (Java)

Back to top


trans.rascal.OO.java.LCOM4-Java-Quartiles

Lack of cohesion in methods 4 quartiles (Java)

Back to top


trans.rascal.OO.java.LCOM4-Java

Lack of cohesion in methods 4 (Java)

Back to top


trans.rascal.OO.java.SR-Java

Specialization ratio (Java)

Back to top


trans.rascal.OO.java.AIF-Java-Quartiles

Attribute inheritance factor quartiles (Java)

Back to top


trans.rascal.OO.java.NOC-Java-Quartiles

Number of children quartiles (Java)

Back to top


trans.rascal.OO.java.NOC-Java

Number of children (Java)

Back to top


trans.rascal.OO.java.AIF-Java

Attribute inheritance factor (Java)

Back to top


trans.rascal.OO.java.RR-Java

Reuse ratio (Java)

Back to top


trans.rascal.OO.java.LCC-Java-Quartiles

Loose class cohesion quartiles (Java)

Back to top


trans.rascal.OO.java.NOA-Java

Number of attributes (Java)

Back to top


trans.rascal.OO.java.Ce-Java-Quartiles

Efferent coupling quartiles (Java)

Back to top


trans.rascal.OO.java.NOM-Java-Quartiles

Number of methods quartiles (Java)

Back to top


trans.rascal.OO.java.NOA-Java-Quartiles

Number of attributes quartiles (Java)

Back to top


trans.rascal.OO.java.CBO-Java-Quartiles

Coupling between objects quartiles (Java)

Back to top


trans.rascal.OO.java.MPC-Java

Message passing coupling (Java)

Back to top


trans.rascal.LOC.java.LOCoverJavaClass

The distribution of physical lines of code over Java classes, interfaces and enums explains how complexity is distributed over the design elements of a system.

Back to top


trans.rascal.advancedfeatures.java.AdvancedLanguageFeaturesJavaQuartiles

Quartiles of counts of advanced Java features (wildcards, union types and anonymous classes). The numbers indicate the thresholds that delimit the first 25%, 50% and 75% of the data, as well as the maximum and minimum values.
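The five reported numbers can be sketched as a five-number summary (using linear interpolation between sorted data points; the exact quantile convention the metric uses is not specified here, and the per-file counts are hypothetical):

```python
def five_number_summary(values):
    """Minimum, first quartile, median, third quartile and maximum,
    using linear interpolation between sorted data points."""
    xs = sorted(values)
    n = len(xs)
    def quantile(q):
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac
    return [xs[0], quantile(0.25), quantile(0.5), quantile(0.75), xs[-1]]

# Counts of advanced-feature uses per file (hypothetical data):
summary = five_number_summary([0, 1, 1, 2, 3, 5, 8, 13, 21])
```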

Back to top


trans.rascal.advancedfeatures.java.AdvancedLanguageFeaturesJava

Usage of advanced Java features (wildcards, union types and anonymous classes), reported per file and line number of the occurrence. This metric is for downstream processing by other metrics.

Back to top


trans.rascal.CC.java.CCHistogramJava

Number of Java methods per CC risk factor: counts the number of methods which fall into a low, medium or high risk factor. The histogram can be compared between projects to indicate which is probably easier to maintain on a method-by-method basis.

Back to top


trans.rascal.CC.java.CCOverJavaMethods

Calculates how cyclomatic complexity is spread over the methods of a system. If high CC is localized, then this may be easily fixed but if many methods have high complexity, then the project may be at risk. This metric is good to compare between projects.

Back to top


trans.rascal.CC.java.CCJava

Cyclomatic complexity is a measure of the number of unique control flow paths in the methods of a class. This indicates how many different test cases you would need to test the method. A high number also indicates a lot of work to understand the method. This metric is a basic metric for further processing downstream. It is not easily compared between projects.
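As a rough illustration, cyclomatic complexity can be approximated by counting decision points (1 + the number of branches); real tools compute it on the parsed control-flow graph rather than with a token scan like this:

```python
import re

# Decision-point keywords/operators; a token-based approximation only
# (real tools work on the parsed control flow of each method).
DECISION_RE = re.compile(r"\b(if|for|while|case|catch)\b|&&|\|\||\?")

def cyclomatic_complexity(method_source):
    """CC = 1 + number of decision points in the method body."""
    return 1 + len(DECISION_RE.findall(method_source))

method = """
int clamp(int x) {
    if (x < 0 && x != MIN) return 0;
    if (x > 100) return 100;
    return x;
}
"""
cc = cyclomatic_complexity(method)
```

The weighted method count (WMC) of a class would then simply be the sum of `cyclomatic_complexity` over its methods.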

Back to top


trans.rascal.CC.java.WMCJava

Cyclomatic complexity is a measure of the number of unique control flow paths in the methods of a class. This indicates how many different test cases you would need to test the method. A high number also indicates a lot of work to understand the method. The weighted method count for a class is the sum of the cyclomatic complexity measures of all methods in the class. This metric is a basic metric for further processing downstream. It is not easily compared between projects.

Back to top


Transient Metric Providers for OSGi Dependencies

These metrics are related to OSGi dependencies declared in MANIFEST.MF files.

Back to top


trans.rascal.dependency.numberRequiredPackagesInSourceCode

Retrieves the number of required packages found in the project source code.

Back to top


trans.rascal.dependency.osgi.allOSGiPackageDependencies

Retrieves all the OSGi package dependencies (i.e. Import-Package and DynamicImport-Package dependencies).

Back to top


trans.rascal.dependency.osgi.unversionedOSGiRequiredBundles

Retrieves the set of unversioned OSGi required bundles (declared in the Require-Bundle header). If returned value != {} there is a smell in the Manifest.

Back to top


trans.rascal.dependency.osgi.unusedOSGiImportedPackages

Retrieves the set of unused OSGi imported packages. If set != {} then developers are importing more packages than needed (smell).
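The computation amounts to a set difference between the packages declared in Import-Package and the packages actually referenced by the bundle's code, plus the corresponding ratio (the package names below are hypothetical):

```python
def unused_imported_packages(imported, referenced):
    """Imported OSGi packages that the bundle's code never references.
    A non-empty result indicates the Import-Package smell."""
    return set(imported) - set(referenced)

# Hypothetical Import-Package entries and code-level references:
imported = {"org.osgi.framework", "javax.xml.parsers", "com.example.unused"}
referenced = {"org.osgi.framework", "javax.xml.parsers"}
unused = unused_imported_packages(imported, referenced)

# Analogue of ratioUnusedOSGiImportedPackages:
ratio = len(unused) / len(imported)
```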

Back to top


trans.rascal.dependency.osgi.numberOSGiSplitImportedPackages

Retrieves the number of split imported packages. If returned value > 0 there is a smell in the Manifest.

Back to top


trans.rascal.dependency.osgi.ratioUnusedOSGiImportedPackages

Retrieves the ratio of unused OSGi imported packages with regards to the whole set of imported and dynamically imported OSGi packages.

Back to top


trans.rascal.dependency.osgi.allOSGiBundleDependencies

Retrieves all the OSGi bundle dependencies (i.e. Require-Bundle dependencies).

Back to top


trans.rascal.dependency.osgi.unversionedOSGiExportedPackages

Retrieves the set of unversioned OSGi exported packages (declared in the Export-Package header). If returned value != {} there is a smell in the Manifest.

Back to top


trans.rascal.dependency.osgi.numberOSGiSplitExportedPackages

Retrieves the number of split exported packages. If returned value > 0 there is a smell in the Manifest.

Back to top


trans.rascal.dependency.osgi.allOSGiDynamicImportedPackages

Retrieves all the OSGi dynamically imported packages. If returned value != {} a smell exists in the Manifest file.

Back to top


trans.rascal.dependency.osgi.numberOSGiBundleDependencies

Retrieves the number of OSGi bundle dependencies (i.e. Require-Bundle dependencies).

Back to top


trans.rascal.dependency.osgi.ratioUnversionedOSGiImportedPackages

Retrieves the ratio of unversioned OSGi imported packages.

Back to top


trans.rascal.dependency.osgi.unversionedOSGiImportedPackages

Retrieves the set of unversioned OSGi imported packages (declared in the Import-Package header). If returned value != {} there is a smell in the Manifest.
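A minimal sketch of this check, assuming a simplified Import-Package header without quoted version ranges (which contain commas and require a proper OSGi header parser):

```python
def unversioned_imports(import_package_header):
    """Return the set of Import-Package entries that declare no
    version attribute. A non-empty result signals the smell.
    Note: this naive split does not handle quoted ',' inside
    version ranges such as version="[1.0,2.0)"."""
    unversioned = set()
    for clause in import_package_header.split(","):
        parts = clause.strip().split(";")
        package, attrs = parts[0].strip(), parts[1:]
        if not any(attr.strip().startswith("version") for attr in attrs):
            unversioned.add(package)
    return unversioned

header = 'org.osgi.framework;version="1.8", com.example.util, com.example.io'
print(unversioned_imports(header))  # {'com.example.util', 'com.example.io'}
```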

Back to top


trans.rascal.dependency.osgi.numberOSGiPackageDependencies

Retrieves the number of OSGi package dependencies (i.e. Import-Package and DynamicImport-Package dependencies).

Back to top


trans.rascal.dependency.osgi.ratioUnversionedOSGiRequiredBundles

Retrieves the ratio of unversioned OSGi required bundles.

Back to top


trans.rascal.dependency.osgi.usedOSGiUnimportedPackages

Retrieves the set of used but unimported packages. This metric does not consider packages implicitly imported through the Require-Bundle header. If set != {} then developers may be depending on the execution environment (smell).

Back to top


trans.rascal.dependency.osgi.ratioUnversionedOSGiExportedPackages

Retrieves the ratio of unversioned OSGi exported packages.

Back to top


trans.rascal.dependency.osgi.ratioUsedOSGiImportedPackages

Retrieves the ratio of used imported packages. If ratio == 1.0 all imported packages are used in the project code.
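A sketch of the computation, assuming the imported and used package sets have already been extracted (the package names below are illustrative):

```python
def ratio_used_imports(imported, used):
    """Ratio of imported packages actually referenced in the
    source code: 1.0 means every imported package is used."""
    if not imported:
        return 1.0  # nothing imported, so nothing is wasted
    return len(imported & used) / len(imported)

imported = {"com.example.a", "com.example.b", "com.example.c", "com.example.d"}
used = {"com.example.a", "com.example.b", "com.example.x"}
print(ratio_used_imports(imported, used))  # 0.5
```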

Back to top


Transient Metric Providers for Maven dependencies

These metrics are related to Maven dependencies declared in pom.xml files.

Back to top


trans.rascal.dependency.numberRequiredPackagesInSourceCode

Retrieves the number of required packages found in the project source code.

Back to top


trans.rascal.dependency.maven.ratioOptionalMavenDependencies

Retrieves the ratio of optional Maven dependencies.
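As a sketch, the ratio can be derived from the `<optional>` flag on each `<dependency>` element in pom.xml (the POM fragment below is illustrative):

```python
import xml.etree.ElementTree as ET

NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def ratio_optional(pom_xml):
    """Ratio of <dependency> entries marked <optional>true</optional>."""
    root = ET.fromstring(pom_xml)
    deps = root.findall(".//m:dependency", NS)
    if not deps:
        return 0.0
    optional = [d for d in deps
                if d.findtext("m:optional", default="false",
                              namespaces=NS).strip() == "true"]
    return len(optional) / len(deps)

POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <dependencies>
    <dependency><groupId>g</groupId><artifactId>a</artifactId>
      <optional>true</optional></dependency>
    <dependency><groupId>g</groupId><artifactId>b</artifactId></dependency>
  </dependencies>
</project>"""
print(ratio_optional(POM))  # 0.5
```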

Back to top


trans.rascal.dependency.maven.numberUniqueMavenDependencies

Retrieves the number of unique Maven dependencies.

Back to top


trans.rascal.dependency.maven.allOptionalMavenDependencies

Retrieves all the optional Maven dependencies.

Back to top


trans.rascal.dependency.maven.isUsingTycho

Checks if the current project is a Tycho project.
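One common signal is the presence of the tycho-maven-plugin or a Tycho packaging type in pom.xml; the string-based heuristic below is a simplified assumption, not the provider's actual implementation:

```python
def is_using_tycho(pom_xml):
    """Heuristic: a pom that configures the tycho-maven-plugin or
    declares a Tycho packaging type is treated as a Tycho project."""
    tycho_markers = ("tycho-maven-plugin",
                     "<packaging>eclipse-plugin</packaging>",
                     "<packaging>eclipse-feature</packaging>")
    return any(marker in pom_xml for marker in tycho_markers)

pom = ("<project><build><plugins><plugin>"
       "<artifactId>tycho-maven-plugin</artifactId>"
       "</plugin></plugins></build></project>")
print(is_using_tycho(pom))  # True
```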

Back to top


trans.rascal.dependency.maven.numberMavenDependencies

Retrieves the number of Maven dependencies.

Back to top


trans.rascal.dependency.maven.allMavenDependencies

Retrieves all the Maven dependencies.

Back to top


Transient Metric Providers for Docker Dependencies

This metric is related to Docker dependencies declared in Dockerfiles.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.docker.dependencies

Retrieves the names of the dependencies that are declared in the Dockerfiles of a project and additional information such as their version and type.
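One kind of Dockerfile dependency is the base image declared by each FROM instruction; a minimal sketch of extracting those (multi-stage `AS` aliases are dropped, and other dependency kinds such as packages installed via RUN are out of scope here):

```python
def docker_base_images(dockerfile_text):
    """Extract (image, version) pairs from FROM instructions;
    an untagged image defaults to 'latest'."""
    deps = []
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if line.upper().startswith("FROM "):
            ref = line.split()[1]  # drops any 'AS <stage>' suffix
            image, _, tag = ref.partition(":")
            deps.append((image, tag or "latest"))
    return deps

dockerfile = "FROM python:3.9-slim\nRUN pip install flask\nFROM nginx"
print(docker_base_images(dockerfile))
# [('python', '3.9-slim'), ('nginx', 'latest')]
```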

Back to top


Transient Metric Providers for Puppet Dependencies

This metric is related to Puppet dependencies declared in Puppet manifests.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.puppet.dependencies

Retrieves the names of the dependencies that are declared in the Puppet manifests of a project and additional information such as their version and type.

Back to top


Transient Metric Providers for Docker Smells

This metric is related to Docker smells detected in Dockerfiles.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.docker.smells

Detects smells in the Dockerfiles of a project, reporting additional information such as the reason, the file and the line at which each smell is detected.

Back to top


Transient Metric Providers for Puppet Smells

These metrics are related to Puppet smells detected in Puppet manifests.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.puppet.designsmells

Detects design smells in the Puppet manifests of a project, reporting additional information such as the reason and the file in which each smell is detected.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.puppet.implementationsmells

Detects implementation smells in the Puppet manifests of a project, reporting additional information such as the reason, the file and the line at which each smell is detected.

Back to top


Transient Metric Providers for Docker Antipatterns

This metric is related to Docker antipatterns detected in Dockerfiles.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.docker.antipatterns

Detects antipatterns in the Dockerfiles of a project, reporting additional information such as the reason, the file and the line at which each antipattern is detected, and the commit and date to which it relates.

Back to top


Transient Metric Providers for Puppet Antipatterns

These metrics are related to Puppet antipatterns detected in Puppet manifests.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.puppet.designantipatterns

Detects design antipatterns in the Puppet manifests of a project, reporting additional information such as the reason, the file in which each antipattern is detected, and the commit and date to which it relates.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.puppet.implementationantipatterns

Detects implementation antipatterns in the Puppet manifests of a project, reporting additional information such as the reason, the file and the line at which each antipattern is detected, and the commit and date to which it relates.


Transient Metric Providers for Indexing

These metrics facilitate data indexing into the platform.

Back to top


Transient Metric Providers for Projects Relations

This metric is related to the relations between projects analysed on the platform.

Back to top


org.eclipse.scava.metricprovider.trans.configuration.projects.relations

Detects relations between projects already analysed on the platform by determining whether one project is used as a dependency by another.
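A simplified sketch of this matching, assuming each analysed project is known by name together with the names of its dependencies (the project names below are illustrative):

```python
def project_relations(projects):
    """Return (user, dependency) pairs over analysed projects:
    'a' depends on 'b' if one of a's dependency names matches
    the name of another analysed project b."""
    names = set(projects)
    relations = set()
    for project, deps in projects.items():
        for dep in deps:
            if dep in names and dep != project:
                relations.add((project, dep))
    return relations

projects = {"core-lib": [], "web-app": ["core-lib", "left-pad"]}
print(project_relations(projects))  # {('web-app', 'core-lib')}
```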

Back to top


Transient Metric Providers for New Versions

These metrics are related to new versions of the dependencies of the projects analysed on the platform.

Back to top


org.eclipse.scava.metricprovider.trans.newversion.docker

Detects the new versions of dependencies of Docker based projects.

Back to top


org.eclipse.scava.metricprovider.trans.newversion.puppet

Detects the new versions of dependencies of Puppet based projects.

Back to top


org.eclipse.scava.metricprovider.trans.newversion.osgi

Detects the new versions of dependencies of OSGi based projects.

Back to top


org.eclipse.scava.metricprovider.trans.newversion.maven

Detects the new versions of dependencies of Maven based projects.
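The core of such detection is comparing a declared version against the available releases; the sketch below assumes plain dotted numeric versions (pre-release qualifiers and ecosystem-specific schemes are out of scope):

```python
def newer_versions(current, available):
    """Return the available releases that are newer than the
    currently declared version, comparing dotted numeric
    versions component-wise."""
    def key(version):
        return tuple(int(part) for part in version.split("."))
    return sorted((v for v in available if key(v) > key(current)), key=key)

print(newer_versions("2.3.1", ["2.3.0", "2.3.1", "2.4.0", "3.0.0"]))
# ['2.4.0', '3.0.0']
```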

Back to top


org.eclipse.scava.metricprovider.trans.indexing.preparation

This identifies the metric(s) that the user has chosen to execute, in preparation for indexing. (Note: this is required to keep the indexing capabilities of the platform dynamic.)

Additional Information :

Back to top


org.eclipse.scava.metricprovider.indexing.bugs

This metric prepares and indexes documents relating to bug tracking systems.

Back to top


org.eclipse.scava.metricprovider.indexing.commits

This metric prepares and indexes documents relating to commits.

Back to top


org.eclipse.scava.metricprovider.indexing.communicationchannels

This metric prepares and indexes documents relating to communication channels.

Back to top


org.eclipse.scava.metricprovider.indexing.documentation

This metric prepares and indexes documents relating to documentation.

Back to top


Transient Metric Providers for API

These transient metrics are related to the analysis and evolution of APIs.

Back to top


org.eclipse.scava.metricprovider.trans.migrationissuesmaracas

This metric converts the changes found by Maracas into regular expressions that other metrics can use.

Additional Information :

Back to top


Factoids

Factoids are plugins used to present data that has been mined and analysed using one or more historic and/or transient metric providers.

Back to top


Factoids for Bug Trackers

These factoids are related to bug tracking systems.

Back to top


org.eclipse.scava.factoid.bugs.channelusage

This plugin generates the factoid regarding usage data for bug trackers. For example, the total number of new bugs, comments or patches per year.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.emotion

This plugin generates the factoid regarding emotions for bug trackers. For example, the percentage of positive, negative or surprise emotions expressed. There are 6 emotion labels (anger, fear, joy, sadness, love, surprise). Anger, fear and sadness are considered negative while joy and love are considered positive.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.hourly

This plugin generates the factoid regarding hourly statistics for bug trackers. For example, the percentage of bugs, comments etc.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.responsetime

This plugin generates the factoid regarding response time for bug trackers. This could be a cumulative average, yearly average etc.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.sentiment

This plugin generates the factoid regarding sentiment for bug trackers. For example, the average sentiment across all bug trackers associated with a project. The sentiment score lies closer to -1 (negative sentiment), 0 (neutral sentiment) or +1 (positive sentiment).
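As a minimal sketch of how such a score could be mapped to a label (the 0.33 threshold is an illustrative assumption, not the platform's actual cut-off):

```python
def sentiment_label(score, threshold=0.33):
    """Map a sentiment score in [-1, +1] to a coarse label.
    The threshold is a hypothetical cut-off for this sketch."""
    if score <= -threshold:
        return "negative"
    if score >= threshold:
        return "positive"
    return "neutral"

print(sentiment_label(-0.7), sentiment_label(0.1), sentiment_label(0.9))
# negative neutral positive
```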

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.severity

This plugin generates the factoid regarding severity for bug trackers. For example, the number of bugs per severity level, the average sentiment for each severity etc. There are 8 severity levels (blocker, critical, major, minor, enhancement, normal, trivial, unknown). A bug severity is considered unknown if there is not enough information for the classifier to make a decision. Also, blocker, critical and major are regarded as serious bugs.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.size

This plugin generates the factoid regarding bug size for bug trackers. For example, the cumulative number of bug comments or patches.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.status

This plugin generates the factoid regarding bug status for bug trackers. For example, the number of fixed bugs, duplicate bugs etc. There are 7 bug status labels (resolved, nonResolved, fixed, worksForMe, wontFix, invalid and duplicate).

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.threadlength

This plugin generates the factoid regarding bug thread length for bug trackers. For example, the average length of discussion associated with bugs.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.users

This plugin generates the factoid regarding users for bug trackers. For example, the average number of users associated with a project in a bug tracking system.

Additional Information :

Back to top


org.eclipse.scava.factoid.bugs.weekly

This plugin generates the factoid regarding weekly user engagements for bug trackers. For example, the average number of bug comments per week. This can be used to present the most and least busy week in terms of engagement for a particular project.

Additional Information :

Back to top


Factoids for Newsgroups and Forums

These factoids are related to communication channels.

Back to top


org.eclipse.scava.factoid.newsgroups.channelusage

This plugin generates the factoid regarding usage data for newsgroups. For example, the total number of new articles or threads per year.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.emotion

This plugin generates the factoid regarding emotions for newsgroups, such as the percentage of positive, negative or surprise emotions expressed. There are 6 emotion labels (anger, fear, joy, sadness, love, surprise). Anger, fear and sadness are considered negative while joy and love are considered positive.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.hourly

This plugin generates the factoid regarding hourly data for newsgroups, such as the percentage of articles, threads etc.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.responsetime

This plugin generates the factoid regarding response time for newsgroups. This could be a cumulative average, yearly average etc.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.sentiment

This plugin generates the factoid regarding sentiment for newsgroups. For example, the average sentiment across all newsgroup channels associated with a project. The sentiment score lies closer to -1 (negative sentiment), 0 (neutral sentiment) or +1 (positive sentiment).

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.severity

This plugin generates the factoid regarding severity for newsgroups. For example, the number of articles per severity level, the average sentiment for each severity etc. There are 7 severity levels (blocker, critical, major, minor, enhancement, normal, trivial). Note: blocker, critical and major are regarded as serious bugs.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.size

This plugin generates the factoid regarding thread or article size for newsgroups. For example, the cumulative number of threads.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.status

This plugin generates the factoid regarding thread or article status for newsgroups. For example, the number of requests and replies, unanswered threads etc.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.threadlength

This plugin generates the factoid regarding thread length for newsgroups. For example, the average length of discussion per day, month etc.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.users

This plugin generates the factoid regarding users for newsgroups. For example, the average number of users associated with a project in a newsgroup channel.

Additional Information :

Back to top


org.eclipse.scava.factoid.newsgroups.weekly

This plugin generates the factoid regarding weekly user engagement for newsgroups. For example, the average number of comments per week. This can be used to present the most and least busy week in terms of engagement for a particular project.

Additional Information :

Back to top


Factoids for Documentation

These factoids are related to documentation.

Back to top


org.eclipse.scava.factoid.documentation.entries

This plugin generates the factoid regarding which sections have been found and which are missing in the documentation. This can help identify which sections should be added, or made easier to find, to improve the documentation.

Additional Information :

Back to top


org.eclipse.scava.factoid.documentation.sentiment

This plugin generates the factoid regarding sentiment for documentation.

Additional Information :

Back to top