CI Bug Log

CI Bug Log is a web service that automates the analysis of unit-test failures as much as possible. It is written in Python, based on Django, and is meant to be run with PostgreSQL.

How to deploy a development instance

The tool is still in development and is not ready to be deployed in production. If you want to contribute to the project, however, here is how to set up a development instance.

This guide assumes you already have a PostgreSQL server up and running, with credentials and a database for the service.
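
If you still need to create these, a minimal setup could look like the following (the user name, password and database name are only placeholders, adjust them to your environment):

$ sudo -u postgres psql -c "CREATE USER cibuglog WITH PASSWORD 'changeme';"
$ sudo -u postgres psql -c "CREATE DATABASE cibuglog OWNER cibuglog;"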

The recommended way to deploy the instance is to use Python’s venv and pip.

# Download the piglit dependency
$ cd <directory of this README file>
$ git submodule init
$ git submodule update

# Create a virtual environment to run the instance
$ python3 -m venv .
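
# Activate it (skip this step if your setup_env.sh already activates the venv)
$ source bin/activate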

# Setup the configuration (edit all the parameters)
$ cp setup_env.sh.sample setup_env.sh
$ $EDITOR setup_env.sh
$ source setup_env.sh

# Install all the dependencies
$ pip install -r requirements.txt
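
# Create the database schema if needed (a standard Django step; skip it if
# your environment already takes care of it)
$ ./manage.py migrate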

# Create a superuser
$ ./manage.py createsuperuser

# Run the instance (port 8000, restricted to localhost)
$ ./manage.py runserver 127.0.0.1:8000

Now, visit the URL http://127.0.0.1:8000/admin/. Log in with the superuser credentials, and create entries for the following categories:

  • CIResults/bugtracker: Bug trackers used to store bugs (GitLab, JIRA or Bugzilla)

  • CIResults/testsuite: Testsuites used by your project

  • CIResults/component: Components used by your project (Linux, TestSuite, Library, …)

Finally, make sure to set up a pre-commit hook that runs all the tests:

$ cat .git/hooks/pre-commit
#!/bin/bash
exec tox -p all -e py36,py38-coverage,pep8
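
Don’t forget to make the hook executable:

$ chmod +x .git/hooks/pre-commit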

If you would like to speed up the execution of the tests on your development platform by 2x, add the following lines to your postgresql.conf file:

fsync = false
full_page_writes = false
synchronous_commit = off

Running CI Bug Log in production using docker

The recommended way to deploy this website is to use the docker image found in our registry: registry.freedesktop.org/gfx-ci/cibuglog:latest

This image contains:

  • CI Bug Log, along with all its dependencies

  • uwsgi: runs the website and exposes it on port 8000

Set up

First, make sure your machine is ready to execute docker images:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally

latest: Pulling from library/hello-world
d1725b59e92d: Pull complete
Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

Secondly, make sure that PostgreSQL is listening on docker’s subnet:

# 0) Create a user, password and database (see the example in the development section above)

# 1) Check what is docker's network (172.17.0.1 in this case)
$ ip addr
  [...]
  3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
      link/ether 02:42:b5:48:09:01 brd ff:ff:ff:ff:ff:ff
      inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
      inet6 fe80::42:b5ff:fe48:901/64 scope link
           valid_lft forever preferred_lft forever
  [...]

# 2) Configure PostgreSQL to listen on the docker subnet
# $EDITOR /var/lib/postgres/data/postgresql.conf
  [...]
  listen_addresses = 'localhost,172.17.0.1'
  [...]

# 3) Tell PostgreSQL how to authenticate users from this subnet
# echo "host    all             all             172.17.0.1/16           md5" >> /var/lib/postgres/data/pg_hba.conf
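
# 4) Restart (or reload) PostgreSQL so the changes take effect
#    (the exact service name may vary across distributions)
# systemctl restart postgresql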

Thirdly, we are ready to run CI Bug Log:

# Set all the parameters
# wget https://gitlab.freedesktop.org/gfx-ci/cibuglog/raw/master/docker.env.sample
# mv docker.env.sample /etc/cibuglog.env
# $EDITOR /etc/cibuglog.env  # Make sure to edit the SECRET KEY and set the DB information

# Set up a way to execute commands in the container (prefix all commands with this)
# wget https://gitlab.freedesktop.org/gfx-ci/cibuglog/raw/master/cibuglog-docker-run.sh
# mv cibuglog-docker-run.sh /usr/bin/cibuglog-docker-run
# chmod +x /usr/bin/cibuglog-docker-run

# Execute the image and expose cibuglog on port 80 of the HOST
$ docker run --rm --env-file /etc/cibuglog.env -p 80:8000 --name cibuglog registry.freedesktop.org/gfx-ci/cibuglog:latest

Finally, let’s create a super user and access the admin:

$ cibuglog-docker-run ./manage.py createsuperuser
$ xdg-open http://127.0.0.1/admin

If all went well, congratulations!

Running CI Bug Log as a service

For actual production, you might want to use the following systemd unit:

/etc/systemd/system/cibuglog.service:

[Unit]
Description=CI Bug Log's docker container
After=network.target

[Service]
Type=simple
ExecStartPre=/usr/bin/docker pull registry.freedesktop.org/gfx-ci/cibuglog:latest
ExecStart=/usr/bin/docker run --rm --env-file /etc/cibuglog.env -p 80:8000 --name cibuglog registry.freedesktop.org/gfx-ci/cibuglog:latest
PrivateDevices=yes

[Install]
WantedBy=multi-user.target
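
Once the unit file is in place, reload systemd and enable the service:

# systemctl daemon-reload
# systemctl enable --now cibuglog.service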

If you want to run CI Bug Log alongside other web services, you might want to consider using a separate web server such as nginx as a reverse proxy. In this case, you’ll just need to change the host port 80 above to whatever local port you prefer and point nginx at it :)
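
For reference, a minimal nginx server block could look like this (assuming the host port in the unit file was changed to 8080; cibuglog.example.com is a placeholder hostname):

server {
    listen 80;
    server_name cibuglog.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}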

That’s all, folks!

Adding components, builds and testsuite runs

NOTICE: Don’t forget to prefix all of these commands with cibuglog-docker-run if you deployed the service using docker.

Components and testsuites

The first step before importing builds and test results is to create the list of components and test suites.

If a piece of software is a dependency for the execution of your tests, then it should be added as a component. To create a new component, you need to provide a unique name (Linux for example), a description of what this component does, and a URL to the project for further information about it.

If a piece of software produces test results (like IGT), then you should create a new test suite. Test suites are components with additional fields: the acceptable statuses field allows specifying which results will not be considered failures (pass and notrun for IGT), while the notrun status field should be set to the name of the status representing not-run tests.

After creating all the components/test suites, you can move to importing builds!

Builds

All build names need to be unique. It is recommended to store the information of each build in its own folder, like so: builds/$build_name/

Build information format

The $build_name directory should contain at least a build.ini file. Here is an example file:

[CIRESULTS_BUILD]
name = IGT_4123     # This has to be unique in the database
component = IGT     # Component or Testsuite name
repo_type = git
repo = git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
branch = master
version = d5e51a60e5cbb807bcacd2655bd4ffe90a686bbb
upstream_url = https://cgit.freedesktop.org/xorg/app/intel-gpu-tools/commit/?id=d5e51a60e5cbb807bcacd2655bd4ffe90a686bbb
parameters_file = kernel.config.bz2
build_log_file = kernel_build.log
parents = IGT_4121 IGT_4122  # parent builds (will be used to suggest comparisons)

In this example, the $build_name directory would also contain the following files:

  • kernel.config.bz2

  • kernel_build.log

If the list of parameters for the build is short, you may want to replace the ‘parameters_file’ line with, for example, ‘parameters = --prefix=/usr --enable-foo’.

Adding a build

If you followed the instructions above, importing a build can be done with the following command:

$ ./add_build.py path/to/build/folder/

The command should return without any output.

Adding a build without a build.ini file

If you do not want to use a build.ini file to import a build, you may give the parameters directly to add_build.py (see the sketch after the list below). The minimum set of parameters necessary to create a new build is:

  • name: Name of the build. Has to be unique.

  • component: Component this build belongs to

  • version: Version of this build (git sha1, revision number, …)
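
As a sketch only (the option names below are assumptions, not the verified interface; run ./add_build.py --help for the actual parameter names):

$ ./add_build.py --name IGT_4123 --component IGT --version d5e51a60e5cbb807bcacd2655bd4ffe90a686bbb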

Testsuite runs

Testsuite runs are part of a Run configuration (runconfig). The purpose of a runconfig is to aggregate multiple testsuite runs under a single name which represents a single HW and SW configuration. This allows for simple comparisons between different run configurations.

It is recommended that you store tests results using the following directory hierarchy: $runconfig/$testsuite/$machine/$shard_id/{result files}

Runconfig information format

The $runconfig directory should contain a runconfig.ini file. Here is an example file:

[CIRESULTS_RUNCONFIG]
name = RUNCFG_1                        # Unique name in the database
builds = IGT_0 RENDERCHECK_0 LINUX_0   # List of builds used (put here all your common SW config)
tags = DebugBuild PostMerge Anything   # Tags set on your runconfig (useful for bug matching)

[IGT]           # Testsuite name (must match $testsuite, not necessarily the name in the database)
build = IGT_0   # Build name of this testsuite (must be in the above builds)
format = piglit # Format in which the results are stored
version = 1     # Version of the format in which the results are stored

[RENDERCHECK]
build = RENDERCHECK_0
format = piglit # Version can be omitted, defaults to 1

Right now, only the piglit format is supported by CI Bug Log. If you want to add support for your own results format, you will need to edit CIResults/run_import.py.

Adding a runconfig

If you followed the instructions above, importing a runconfig can be done with the following command:

$ ./add_run.py path/to/runconfig/folder/
adding 71 missing machine(s)
adding 2716 missing test(s) (IGT)
adding 2 missing statuse(s) (IGT)
adding 215 testsuite runs
adding 22840 test results
matched 266 failures (0 known, 266 unknown) in 8.07 ms
Updating the statistics of 1 issues, and 1 filters
Updating Issue 1/1
Updating filter 1/1

You may now visit http://127.0.0.1:8000 and explore the list of failures.

Adding a runconfig without a runconfig.ini file

If you do not want to use a runconfig.ini file to import a run, you may give parameters directly to add_run.py. The minimum set of parameters necessary to create a new runconfig is its name and its tags. Once a runconfig has been created, its tags can no longer be changed.

If you want to add testsuite run results, you will also need to add the ‘results’ parameter, which points to a file containing the list of results to import, or ‘-’ to read the list from stdin. The file needs one entry per line, with the following space-separated fields in this order:

  • Testbuild name: Build of the testsuite that produced the results (has to be in the list of builds)

  • Results format: Format in which the results are stored

  • Results format version: Version of the format in which the results are stored

  • Machine: Name of the machine that generates the results

  • Testsuite Run ID: An integer that should not be repeated for the same runconfig and machine

  • Results path: Path to the results on the filesystem

Here is an example of such a file:

IGT_4345 piglit 1 shard-apl4 25 results/IGT/CI_DRM_3929/shard-apl4/25/
IGT_4345 piglit 1 shard-apl5 3 results/IGT/CI_DRM_3929/shard-apl5/3/

Adding a temporary runconfig

If the runconfig you are trying to import is not meant to participate in statistics and may be deleted at any time, you need to create the run as temporary. This is for example useful for pre-merge results.

To do so, either add the “temporary: True” line in the ‘CIRESULTS_RUNCONFIG’ section of the runconfig.ini, or add the -T parameter to add_run.py if you are importing the run without a runconfig.ini.
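
For example, in runconfig.ini (the name, builds and tags below are placeholders):

[CIRESULTS_RUNCONFIG]
name = PREMERGE_1234
builds = IGT_0
tags = PreMerge
temporary: True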

Deleting a runconfig

To delete a runconfig (temporary or not), just run the following command line:

$ ./add_run.py -d $runconfig_name

Filing issues

Now that you have failures referenced in your database, you will want to match these failures to bugs. To do so, you need to go to the main view (http://127.0.0.1:8000) and click on “File Issue”.

There, you can input a list of bug IDs (on multiple bug trackers), create filters (or import them from other issues), and add other information.

The most important part is the ‘Filters’ column, as this is where the matching automation happens. Click on the [+] button which should open a “Create new filter” modal window. There, you’ll be asked to input a short description of what your filter matches, along with the list of runconfig tags, machines, tests and test result statuses that need to be set for the filter to match. You may also further refine the filter using regular expressions on stdout, stderr, or dmesg (kernel logs).

After any modification, the dialog should tell you how many unknown failures this filter would cover. Tooltips show which tags, machines, tests, and statuses are covered. Press ‘Create’ to create the filter and add it to the issue.

When you are happy with your filters and the associated list of bugs, press “Save”. You will return to the main page, where the new active issue will be listed. When new failures match the filters associated with this issue, they will automatically be considered known failures and will contribute to the failure-rate statistics of the issue and its filters.

Dealing with public/private machines, tests, or testsuites

To be written later. The database contains ‘public’ fields to state whether the data should be public or not.

This feature is WIP.

Suppressing tests, machines, or testsuites for continuous integration

To be written later. The database contains ‘vetted’ fields to state whether the test/machine/testsuite is considered stable-enough to be used for continuous integration.

This feature is WIP.

FAQ

Why does CI Bug Log only support PostgreSQL?

This is a matter of performance. If your database of choice does not support retrieving primary keys after a bulk_create, then it is not supported and will not work.

See Django’s bulk_create
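
To illustrate what this means in practice (MyModel is only a placeholder model, not part of CI Bug Log):

from myapp.models import MyModel  # hypothetical app and model

objs = MyModel.objects.bulk_create([MyModel(name="a"), MyModel(name="b")])
# On PostgreSQL, objs[0].pk and objs[1].pk are now set because the backend
# supports RETURNING; on most other backends they would still be None, which
# breaks CI Bug Log's bulk imports.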
