We use SonarQube for static code analysis scans. These are automated via Travis, same as our usual builds.
We don't use Travis' Sonar plugin because our builds run inside Docker rather than directly on Travis, and the two are not compatible.
We have a dedicated docker-compose target for scans, sonar-scan. Necessary values are passed to the Sonar scanner as command-line parameters.
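As a sketch, invoking that target by hand could look like the following (the exact service definition lives in docker-compose.yml; the variable names are the ones described below):

```shell
# Run the dedicated scan target, passing the secrets through the environment
docker-compose run \
  -e GITHUB_TOKEN -e SONAR_TOKEN -e SONAR_PROJ_KEY \
  sonar-scan
```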
For the whole config to work, the following one-time configuration steps are necessary:
- Create organization and project in SonarQube - done already, https://sonarcloud.io/organizations/mraa-github (key: mraa-github) and https://sonarcloud.io/dashboard?id=mraa-master (key: mraa-master);
- Create a technical account on GitHub with push permissions for the mraa repo. It is used for reporting pull request statuses in the 'checks' area. We have intel-iot-devkit-helperbot for this, shared with UPM.
- Add several environment variables in Travis:
  - GITHUB_TOKEN (secure) - GH OAuth token for the technical user;
  - SONAR_TOKEN (secure) - this one comes from the SonarQube org properties;
  - SONAR_PROJ_KEY (may be public) - project and org keys (names) from the SonarQube org, see above.
These scans are executed each time there's an internal pull request (from a branch local to the main mraa repo) or a master branch push. For the former, a so-called 'preview' scan is executed, which doesn't upload anything to the SonarQube organization and only reports the result within the PR. Upon a master branch push, a normal scan is executed and the results are uploaded to SonarQube.
When there's a so-called 'external' pull request (originating from somewhere other than mraa's main repo, e.g. from a fork), no scan is done for security reasons, as code within such a PR would have access to the tokens listed above.
Given this setup, it's beneficial to create internal pull requests whenever possible, because you'll catch problems right away in the preview scan, before the PR is merged.
It's good practice to run the scan manually before actually submitting a PR. There may also be a need to run the scan manually out-of-cycle, so here's how.
Just use the command line from the scanner script, see the sonar_cmd_base variable specifically, and replace the various tokens listed there with proper ones. Please also don't forget that you need to run the build wrapper first, so that the scanner knows what to scan.
The set of commands for the main mraa repo and SonarQube project would look like the below. Note that it will upload results to SonarQube by default; if you don't want that, set up a throwaway 'project' in SonarQube, or create a separate 'organization' dedicated to your mraa repo fork:
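A minimal sketch of such a run, assuming the Sonar tools were unpacked under ~/sonar (the install path and build command are placeholders; the authoritative command line is the sonar_cmd_base variable in the scanner script):

```shell
# Make the downloaded build-wrapper and sonar-scanner binaries visible
export PATH="$HOME/sonar/build-wrapper-linux-x86:$HOME/sonar/sonar-scanner/bin:$PATH"

# 1. Run the build through the build wrapper, so the scanner knows what to scan
build-wrapper-linux-x86-64 --out-dir bw-output make clean all

# 2. Run the scanner, pointing it at the keys and token described above
sonar-scanner \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.organization=mraa-github \
  -Dsonar.projectKey=mraa-master \
  -Dsonar.login="$SONAR_TOKEN" \
  -Dsonar.cfamily.build-wrapper-output=bw-output
```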
Notice that we first set the PATH to point to our downloaded copy of the Sonar tools. You can find more information on setting these up in SonarQube's nice Getting Started tutorial.
In the past we've used Coverity to do static code analysis scans. Below is the documentation on that setup - for archiving purposes.
This is the procedure to submit a build to Coverity. You'll need to install coverity-submit for your OS.
The Persistency packages are scanned regularly using the Coverity code analyzer.
This is done thanks to the infrastructure provided by the EP-SFT group. This consists of a Coverity Connect server on coverity.cern.ch and a build node on buildcoverity.cern.ch.
1. Coverity server (set up the database to host the results of the Coverity scans)
The results of the Coverity scans for the Persistency packages can be viewed and analyzed by logging in to the Coverity Connect instance on coverity.cern.ch.
Prepare projects, streams, triage stores and component maps for Coverity scans
To use the Coverity Connect server, refer to the Coverity Usage and Administration Guide (authentication required). Before using Coverity, you or your admin must have configured the following sets of server-side entities for you:
- One or more users and passwords
- One or more projects, owned by the relevant users
- It is best to define separate projects for software packages whose defects should be analyzed independently, because the user-level view defines a project as the top-level context that can be changed with a drop-down menu
- One stream within each project, owned by the relevant users
- When defects are committed to the database from a software build, they are sent to a single stream, which automatically defines the project they belong to
- It is possible to have several streams under the same project, then define special menus in the user dashboard to see within a project only the defects coming from a well-defined stream
- In principle it may also be possible to share the same stream across projects by creating in project B a 'link' for a stream belonging to project A, but the benefits of this are not clear
- Optionally, one or more triage stores to store defect and triage history, owned by the relevant users
- One stream uses one and only one triage store, but the same triage store may be used by different streams (e.g. the Default triage store is used by many streams)
- If a dedicated triage store is not created, the Default triage store is used
- Optionally, one or more component maps to categorize defects by component within each project
- One stream uses one and only one component map, but the same component map may be used by different streams (e.g. the Default component map is used by many streams)
- If a dedicated component map is not created, the Default component map is used
- Unlike projects, streams and triage stores, component maps are all owned by the admin user and their ownership may not be transferred
- Managing component maps requires the global (not per-project) role 'project admin' (see 'managing custom roles')
The following setup is being used for CORAL and COOL as of January 2016:
- Projects: two separate projects CORAL and COOL have been created
- Streams: two separate streams CORAL-Stream-trunk and COOL-Stream-trunk have been created, one in each of the CORAL and COOL projects
- Triage stores: a single triage store CORALCOOL-TriageStore has been created and has been associated to the two streams above
- Component maps: a single component map CORALCOOL-CompMap has been created and has been associated to the two streams above
- Within this component map, several components such as 'system', 'gcc', 'Boost', 'Qt', 'lcgexternal', 'PyCool', 'CORAL_SERVER' have been defined with associated file name rules
- Users: the two users valassi and avalassi can be equivalently used, they are project owners, stream owners and triage store owners for all entities described above
2. Coverity build node (build, analyze and commit Coverity scans)
The scans are prepared on a specific SLC6 node buildcoverity.cern.ch, where Coverity is installed. The new infrastructure allows software builds with c++11, which could not be used on the previous machine.
Log in using your AFS user name (e.g. avalassi):
Prepared directories for the Coverity scans
The Persistency scans are prepared in /builda/Persistency and its subdirectories.
Extended attribute ACLs (see setfacl) can be used to make sure that all relevant users can read and write the same files there.
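As a sketch, granting one of the relevant users full access to the scan area could look like this (the user name avalassi is taken from the login example below; run with suitable privileges):

```shell
# Grant user 'avalassi' read/write/traverse access to the existing tree
setfacl -R -m u:avalassi:rwX /builda/Persistency

# Also make that ACL the default for newly created files and directories
setfacl -R -d -m u:avalassi:rwX /builda/Persistency

# Verify the resulting ACLs
getfacl /builda/Persistency
```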
Several subdirectories for the scans (two top-level directories for CORAL and COOL, with a single subdirectory for the trunk in each project) have been created from scratch in January 2016: please use the existing tags/platforms (update them from SVN if necessary) and do not add any other tags/platforms.
In the /builda/Persistency/<project>/trunk directory, the results are created in a cov-out subdirectory (which must be deleted before each new scan), at the same level as src and the build and installation directories. For instance:
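A hypothetical layout consistent with the description above (actual build and installation directory names may differ):

```
/builda/Persistency/CORAL/trunk
├── src/        # checked-out CORAL sources
├── cov-out/    # Coverity intermediate results (delete before each new scan)
├── build/     # build directory
└── install/   # installation directory
```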
Execute the Coverity scans: build, analyze, commit
In the following, we will describe how to prepare the scans for CORAL and COOL (the latter using the former). To prepare the Coverity scans, you must do the following:
- Check out the appropriate version of the code.
- Check that the local COOL build is set up to use the local CORAL build (this should contain ...)
- Set up the build environment (conventionally, we use the x86_64-slc6-gcc49-dbg platform). Then, remove old build fragments and old Coverity scan results and finally build the code through the Coverity wrapper. This is done via the build.sh script attached to this page.
- If the project build through the Coverity wrapper is successful, the following output should be produced. For reference, the build on 2016 January 27 gave 478 (100%) for CORAL trunk after 5 minutes and 196 (100%) for COOL30x after 3 minutes.
- After building the code, let Coverity analyze the results. For reference, the analysis of the 2016 January 27 builds took 2 minutes and found 31 defects for CORAL, and took 1 minute and found 43 defects for COOL. This is done via the analyze.sh script attached to this page.
- After analyzing the code through Coverity, commit the results to the database so that it gets published on coverity.cern.ch. This is done via the commit.sh script attached to this page. Note that this step is quite slow: for reference, for the analysis of the 2016 January 27 builds it took 4 and 3 minutes to commit (in parallel!) the CORAL and COOL defects, respectively.
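The three scripts attached to this page are authoritative; as a hedged sketch, their core steps amount to something like the following (the build command is a placeholder, the stream and user names are the ones from the setup above):

```shell
cd /builda/Persistency/CORAL/trunk
rm -rf cov-out                      # old scan results must be removed first

# build.sh: run the build through the Coverity wrapper
cov-build --dir cov-out make

# analyze.sh: analyze the captured build
cov-analyze --dir cov-out --all

# commit.sh: publish the defects to the Coverity Connect server
cov-commit-defects --dir cov-out \
  --host coverity.cern.ch \
  --stream CORAL-Stream-trunk \
  --user avalassi
```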
3. Coverity server (analyze and triage the results of the Coverity scans)
After committing the results of Coverity scans, go back to the Coverity server. You may then analyze and triage all defects (and iterate after fixing them if needed).
- As of July 2012, ALL issues found by Coverity in CORAL and COOL had been fixed (and/or at least properly documented and filed for later resolution). External issues due for instance to Boost, libstdc++ or ROOT had been dismissed (and in the case of ROOT, e.g. in the PyCool build, reported as bug ROOT-4380). The POOL issues will not be fixed as the relevant code has been moved to an ATLAS-specific package.
- The relevant Savannah tickets are task #20073 for COOL, task #20075 for CORAL, bug #95365 for CORAL_SERVER and task #20074 for POOL.
- As described in these tickets, some issues can only be fixed with API changes in CORAL and COOL. These API changes were going to be released in November 2013 in CORAL 2.4.0 and COOL 2.9.0, but previously they had already been committed to the code, protected with #ifdef COOL290CO guards. In order to validate these changes, the VersionInfo.h files in the two projects had been locally modified to enable these and some other API extensions through the Coverity builds in July 2012. The COOL build had also been configured (using valassi's private requirements) to use the local CORAL build with the relevant API extensions too.
- In June 2015, Coverity scans were resumed, on a new PH-SFT Coverity server supporting c++11 for the first time. The gcc49 compiler, with c++11 enabled, was used for the first time (CMTCONFIG=x86_64-slc6-gcc49-dbg). Builds of the CORAL_3xx and COOL_3xx branches, using c++11 also in the public API, were executed for the first time. Builds were again (and for the last time) executed using CMT. ALL issues found by Coverity were fixed (and/or triaged and dismissed).
- See CORALCOOL-2768 for more details.
- In January 2016, Coverity scans have been again resumed, after the move from CMT to cmake and the cleanup of the SVN repository (moving active development from the 3xx branch to trunk). The gcc49 compiler, with c++11 enabled, is being used again, this time using cmake for the first time (BINARY_TAG=x86_64-slc6-gcc49-dbg). Builds are now executed on CORAL and COOL trunk directly. Note en passant that ccache is disabled in these cmake builds, as missing defect files would be interpreted as due to fixed defects or source files removed from the repository. ALL issues found by Coverity have been triaged and dismissed (all real issues had already been fixed in previous campaigns).
-- AndreaValassi - 2016-01-27