We want to make contributing to this project as easy and transparent as possible.
New versions are being developed in the "dev" branch, or in their own feature branch. When they are deemed ready for a release, they are merged into "release". As a consequence, all contributions must first be staged through "dev" or their own feature branch.
We actively welcome your pull requests.
Pull requests should be made against the `dev` branch. In order to accept your pull request, we need you to submit a CLA. You only need to do this once to work on any of Facebook's open source projects.
Complete your CLA here: https://code.facebook.com/cla
Zstd uses a branch-based workflow for making changes to the codebase. Typically, zstd will use a new branch per sizable topic. For smaller changes, it is okay to lump multiple related changes into a branch.
Our contribution process works in three main stages:
Update:
Check out your fork of zstd if you have not already:
git clone https://github.com/<username>/zstd
cd zstd
Update your local dev branch
git pull https://github.com/facebook/zstd dev
git push origin dev
Topic and development:
Make a new branch on your fork about the topic you're developing for
# branch names should be concise but sufficiently informative
git checkout -b <branch-name>
git push origin <branch-name>
Make commits and push
# make some changes
git add -u && git commit -m <message>
git push origin <branch-name>
Note: run local tests to ensure that your changes didn't break existing functionality
Quick check
make shortest
Longer check
make test
Static analysis is a process for examining the correctness or validity of a program without actually executing it. It usually helps us find many simple bugs. Zstd uses clang's `scan-build` tool for static analysis. You can install it by following the instructions for your OS on https://clang-analyzer.llvm.org/scan-build.
Once installed, you can ensure that our static analysis tests pass on your local development machine by running:
make staticAnalyze
In general, you can use `scan-build` to statically analyze any build script. For example, to statically analyze just `contrib/largeNbDicts` and nothing else, you can run:
scan-build make -C contrib/largeNbDicts largeNbDicts
`scan-build` is part of our regular CI suite. Other static analyzers are not.
It can be useful to look at additional static analyzers once in a while (and we do), but it's not a good idea to multiply the number of analyzers run continuously at each commit and PR: each additional analyzer adds CI time and its own set of warnings, including false positives, which must then be reviewed.
This is different from running a static analyzer once in a while, looking at the output, and cherry-picking a few warnings that seem helpful, either because they detected a genuine risk of bug, or because they help express the code in a way which is more readable or more difficult to misuse. These kinds of reports can be useful, and are accepted.
CI tests run every time a pull request (PR) is created or updated. The exact tests that get run will depend on the destination branch you specify. Some tests take longer to run than others. Currently, our CI is set up to run a short series of tests when creating a PR to the dev branch and a longer series of tests when creating a PR to the release branch. You can look in the configuration files of the respective CI platform for more information on what gets run when.
Most people will just want to create a PR with the destination set to their local dev branch of zstd. You can then find the status of the tests on the PR's page. You can also re-run tests and cancel running tests from the PR page or from the respective CI's dashboard.
Almost all of zstd's CI runs on GitHub Actions (configured at `.github/workflows`), which will automatically run on PRs to your own fork. A small number of tests run on other services (e.g. Travis CI, Circle CI, Appveyor). These require work to set up on your local fork, and (at least for Travis CI) cost money. Therefore, if the PR on your local fork passes GitHub Actions, feel free to submit a PR against the main repo.
A small number of tests cannot run on GitHub Actions, or have yet to be migrated. For these, we use a variety of third-party services (listed below). It is not necessary to set these up on your fork in order to contribute to zstd; however, we do link to instructions for those who want earlier signal.
| Service | Purpose | Setup Links | Config Path |
|---|---|---|---|
| Travis CI | Used for testing on non-x86 architectures such as PowerPC | https://docs.travis-ci.com/user/tutorial/#to-get-started-with-travis-ci-using-github https://github.com/marketplace/travis-ci | `.travis.yml` |
| AppVeyor | Used for some Windows testing (e.g. cygwin, mingw) | https://www.appveyor.com/blog/2018/10/02/github-apps-integration/ https://github.com/marketplace/appveyor | `appveyor.yml` |
| Cirrus CI | Used for testing on FreeBSD | https://github.com/marketplace/cirrus-ci/ | `.cirrus.yml` |
| Circle CI | Historically was used to provide faster signal, but we may be able to migrate these to Github Actions | https://circleci.com/docs/2.0/getting-started/#setting-up-circleci https://youtu.be/Js3hMUsSZ2c https://circleci.com/docs/2.0/enable-checks/ | `.circleci/config.yml` |
Note: the instructions linked above mostly cover how to set up a repository with CI from scratch. The general idea should be the same for setting up CI on your fork of zstd, but you may have to follow slightly different steps. In particular, please ignore any instructions related to setting up config files (since zstd already has configs for each of these services).
Performance is extremely important for zstd and we only merge pull requests whose performance landscape and corresponding trade-offs have been adequately analyzed, reproduced, and presented. This high bar for performance means that every PR which has the potential to impact performance takes a very long time for us to properly review. That being said, we always welcome contributions to improve performance (or worsen performance for the trade-off of something else). Please keep the guidance in the sections below in mind before submitting a performance-related PR.
Performance microbenchmarking is a tricky subject but also essential for Zstd. We value empirical testing over theoretical speculation. This guide is not perfect, but for most scenarios, it is a good place to start.
Unfortunately, the most important aspect in being able to benchmark reliably is to have a stable benchmarking machine. A virtual machine, a machine with shared resources, or your laptop will typically not be stable enough to obtain reliable benchmark results. If you can get your hands on a desktop, this is usually a better scenario.
Of course, benchmarking can be done on non-hyper-stable machines as well. You will just have to do a little more work to ensure that you are in fact measuring the changes you've made and not noise. Here are some things you can do to make your benchmarks more stable:

Your machine and its caches will not be "hot" at the very start of a run, so the observations you collect during that period will be very different from the true performance numbers. Having a very large number of samples will help alleviate this problem slightly, but you can also address it directly by simply not including the first n iterations of your benchmark in your aggregations. You can determine n by looking at the results from each iteration and then hand-picking a good threshold after which the variance in results seems to stabilize.

Also check out LLVM's guide on benchmarking here: https://llvm.org/docs/Benchmarking.html
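As a rough, hypothetical sketch of the discard-the-warm-up idea (none of the names or constants below come from zstd; `run_once()` is a stand-in for whatever you are measuring):

```c
#include <stdio.h>

#define NB_SAMPLES 100   /* total iterations measured                     */
#define WARMUP      10   /* hand-picked threshold of discarded iterations */

/* Stand-in for the code being measured; returns elapsed seconds. */
static double run_once(void) { return 0.0; }

int main(void)
{
    double sum = 0.0;
    int counted = 0;
    int i;
    for (i = 0; i < NB_SAMPLES; i++) {
        double const elapsed = run_once();
        if (i < WARMUP) continue;   /* skip "cold" iterations */
        sum += elapsed;
        counted++;
    }
    printf("mean over %d hot iterations: %f s\n", counted, sum / counted);
    return 0;
}
```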
The fastest signal you can get regarding your performance changes is via the built-in zstd cli bench option. You can run zstd as you typically would for your scenario using some set of options, and then additionally specify the `-b#` option. Doing this will run our benchmarking pipeline for the options you have just provided. If you want to look at the internals of how this benchmarking script works, you can check out programs/benchzstd.c.
For example: say you have made a change that you believe improves the speed of zstd level 1. The very first thing you should use to assess whether you actually achieved any sort of improvement is `zstd -b`. You might try to do something like this. Note: you can use the `-i` option to specify a running time for your benchmark in seconds (default is 3 seconds). Usually, the longer the running time, the more stable your results will be.
$ git checkout <commit-before-your-change>
$ make && cp zstd zstd-old
$ git checkout <commit-after-your-change>
$ make && cp zstd zstd-new
$ zstd-old -i5 -b1 <your-test-data>
1<your-test-data> : 8990 -> 3992 (2.252), 302.6 MB/s , 626.4 MB/s
$ zstd-new -i5 -b1 <your-test-data>
1<your-test-data> : 8990 -> 3992 (2.252), 302.8 MB/s , 628.4 MB/s
Unless your performance win is large enough to be visible despite the intrinsic noise on your computer, benchzstd alone will likely not be enough to validate the impact of your changes. For example, the results of the example above indicate that effectively nothing changed, but there could be a small <3% improvement that the noise on the host machine obscured. So unless you see a large performance win (10-15% consistently), using just this method of evaluation will not be sufficient.
There are a number of great profilers out there. We're going to briefly mention how you can profile your code using Instruments on macOS, perf on Linux, and the Visual Studio profiler on Windows.
Say you have an idea for a change that you think will provide some good performance gains for level 1 compression on Zstd. Typically this means you have identified a section of code that you think can be made to run faster.
The first thing you will want to do is make sure that the piece of code is actually taking up a notable amount of time to run. It is usually not worth optimizing something which accounts for less than 0.0001% of the total running time. Luckily, there are tools to help with this. Profilers will let you see how much time your code spends inside a particular function. If your target code snippet is only part of a function, it might be worth trying to isolate that snippet by moving it to its own function (this is usually not necessary but might be).
Most profilers (including the profilers discussed below) will generate a call graph of functions for you. Your goal will be to find your function of interest in this call graph and then inspect the time spent inside of it. You might also want to look at the annotated assembly which most profilers will provide you with.
We will once again consider the scenario where you think you've identified a piece of code whose performance can be improved upon. Follow these steps to profile your code using Instruments.
1. Select `Time Profiler` from the list of standard templates.
2. Run your benchmarking script from your terminal window. I will just use benchzstd as my benchmarking script for this example:
   $ zstd -b1 -i5 <my-data> # this will run for 5 seconds
3. Once you run your benchmarking script, switch back over to Instruments and attach your process to the Time Profiler. You can do this by clicking on the `All Processes` drop down in the top left of the toolbar and selecting your process; in this example, it will just be labeled `zstd`.
4. Your profiler will now start collecting metrics from your benchmarking script. Once you think you have collected enough samples (usually this is the case after 3 seconds of recording), stop your profiler.
5. Make sure that in the toolbar of the bottom window, `profile` is selected.
6. You should be able to see your call graph. If you don't, make sure you have compiled zstd and your benchmarking script with debug information; on mac and linux, this just means supplying the `-g` flag along with your build script. You might also have to provide the `-fno-omit-frame-pointer` flag.
7. Dig down the graph to find your function call and then inspect it by double clicking the list item. You will be able to see the annotated source code and the assembly side by side.
This wiki has a pretty detailed tutorial on getting started working with perf, so we'll leave you to check that out if you're getting started:
https://perf.wiki.kernel.org/index.php/Tutorial
Some general notes on perf:
Use `perf stat -r # <bench-program>` (where `#` is a repeat count, e.g. `-r 10`) to quickly get some relevant timing and counter statistics. Perf uses a high resolution timer and this is likely one of the first things your team will run when assessing your PR.

Perf has a long list of hardware counters that can be viewed with `perf list`. When measuring optimizations, something worth trying is to make sure the hardware counters you expect to be impacted by your change are in fact being so. For example, if you expect the L1 cache misses to decrease with your change, you can look at the counter `L1-dcache-load-misses`.
TODO
We use GitHub issues to track public bugs. Please ensure your description is clear and has sufficient instructions to be able to reproduce the issue.
Facebook has a bounty program (https://www.facebook.com/whitehat/) for the safe disclosure of security bugs. In those cases, please go through the process outlined on that page and do not file a public issue.
It's a pretty long topic, which is difficult to summarize in a single paragraph. As a rule of thumb, try to imitate the coding style of similar lines of code around your contribution. The following is a non-exhaustive list of rules employed in the zstd code base:
This code base follows the strict C90 standard, with two extensions: 64-bit `long long` types, and variadic macros. This rule is applied strictly to code within `lib/` and `programs/`. Sub-projects in `contrib/` are allowed to use other conventions.
All public symbol declarations must be wrapped in `extern "C" { ... }`, so that this project can be compiled as C++98 code, and linked into C++ applications. This design requirement is fundamental to preserve the portability of the code base.
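As a minimal sketch of what this usually looks like in a public header (the declaration shown is hypothetical, not an actual zstd symbol):

```c
#include <stddef.h>   /* size_t */

#if defined (__cplusplus)
extern "C" {
#endif

/* public declarations go here, e.g. : */
size_t EXAMPLE_compressSomething(void* dst, size_t dstCapacity,
                                 const void* src, size_t srcSize);

#if defined (__cplusplus)
}
#endif
```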
External dependencies must be kept to a minimum. Within `lib/`, this policy is even more drastic. The only external dependencies allowed are `<assert.h>`, `<stdlib.h>`, `<string.h>`, and even then, not directly. In particular, no function shall ever allocate on heap directly, and must instead use `ZSTD_malloc()` and equivalent. Other accepted non-symbol headers are `<stddef.h>` and `<limits.h>`.
Within the project, there is a strict hierarchy of dependencies that must be respected.
`programs/` is allowed to depend on `lib/`, but only its public API. Within `lib/`, `lib/common` doesn't depend on any other directory. `lib/compress` and `lib/decompress` shall not depend on each other. `lib/dictBuilder` can depend on `lib/common` and `lib/compress`, but not `lib/decompress`.
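For instance, code under `programs/` should reach `lib/` only through its public header. A minimal sketch (the helper function is hypothetical; `ZSTD_compressBound()`, `ZSTD_compress()`, and `ZSTD_isError()` are part of the public API):

```c
#include <stdlib.h>
#include "zstd.h"   /* public API only : no internal lib/ headers */

/* Compress src into a freshly allocated buffer; returns NULL on error. */
static void* compressToNewBuffer(const void* src, size_t srcSize, size_t* dstSizePtr)
{
    size_t const dstCapacity = ZSTD_compressBound(srcSize);
    void* const dst = malloc(dstCapacity);
    if (dst == NULL) return NULL;
    *dstSizePtr = ZSTD_compress(dst, dstCapacity, src, srcSize, 1 /* compression level */);
    if (ZSTD_isError(*dstSizePtr)) { free(dst); return NULL; }
    return dst;
}
```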
Functions in `lib/` must use very little stack space: several dozens of bytes max. Everything larger must use the heap allocator, or require a scratch buffer to be emplaced manually.
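A hypothetical illustration of that rule (all names and sizes below are made up):

```c
#include <stddef.h>

/* Discouraged inside lib/ : a large array on the stack */
static void EXAMPLE_buildTable_onStack(void)
{
    unsigned table[8192];   /* 32 KB of stack : far too much */
    (void)table;
}

/* Preferred : operate inside a scratch buffer supplied by the caller */
static size_t EXAMPLE_buildTable(void* workspace, size_t workspaceSize)
{
    size_t const needed = 8192 * sizeof(unsigned);
    if (workspaceSize < needed) return 0;   /* caller must provide enough space */
    /* ... build the table inside workspace ... */
    (void)workspace;
    return needed;
}
```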
All public symbols are prefixed with `ZSTD_`. Because the `libzstd` source code can be amalgamated, each symbol name must attempt to be (and remain) unique. Avoid too generic names that could become ground for future collisions. This generally implies usage of some form of prefix.
Function names follow the convention `PREFIX_camelCase`. In some advanced cases, one can also find `PREFIX_prefix2_camelCase` or `PREFIX_camelCase_extendedQualifier`. Example: `ZSTD_createCCtx()`. Prefer positive names over negated ones: `goBackward` rather than `notGoForward`.
Type names (`struct`, etc.) follow a similar convention, except that they are allowed and even invited to start with an uppercase letter. Example: `ZSTD_CCtx`, `ZSTD_CDict`.
Macro and constant names are all capital letters; the same composition rules (`PREFIX_NAME_QUALIFIER`) apply.
File names follow `snake_case`.
File names must be unique across the entire code base, even when they stand in clearly separated directories.
This code base is `const` friendly, if not `const` fanatical. Any variable that can be `const` (a.k.a. read-only) must be `const`. Any pointer whose content will not be modified must be `const`. This property is then controlled at compiler level. `const` variables are an important signal to readers that a variable isn't modified. Conversely, non-`const` variables are a signal to readers to watch out for modifications later on in the function.
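To make the naming and `const` conventions concrete, here is a small hypothetical snippet (the `EXAMPLE_` prefix and everything in it are invented for illustration, not actual zstd code):

```c
#include <stddef.h>

typedef struct {              /* type names may start with an uppercase letter */
    size_t nbTokens;
} EXAMPLE_Params;

/* function names follow PREFIX_camelCase ; read-only inputs are const */
static size_t EXAMPLE_countSpaces(const EXAMPLE_Params* params,
                                  const char* src, size_t srcSize)
{
    size_t const maxTokens = params->nbTokens;  /* const : never modified below  */
    size_t nbFound = 0;                         /* non-const : updated in a loop */
    size_t pos;
    for (pos = 0; pos < srcSize && nbFound < maxTokens; pos++) {
        if (src[pos] == ' ') nbFound++;
    }
    return nbFound;
}
```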
When a function must be force-inlined, use `FORCE_INLINE_ATTR`, defined in `lib/common/compiler.h`. For debug traces, use `DEBUGLOG(level, ...)`, defined in `lib/common/debug.h`.
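A rough sketch of how these two macros are typically used (the function is hypothetical, the include paths depend on where the file lives relative to `lib/common/`, and `FORCE_INLINE_ATTR` is assumed to expand to the compiler's always-inline attribute):

```c
#include "compiler.h"   /* FORCE_INLINE_ATTR, from lib/common/compiler.h */
#include "debug.h"      /* DEBUGLOG,          from lib/common/debug.h    */

static FORCE_INLINE_ATTR unsigned EXAMPLE_isPowerOf2(unsigned v)
{
    DEBUGLOG(6, "EXAMPLE_isPowerOf2 (v=%u)", v);   /* higher level = more verbose */
    return (v != 0) && ((v & (v - 1)) == 0);
}
```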
Avoid code documentation that merely restates what the code already says; whenever possible, let the code itself carry the explanation.
Example 1: `int nbTokens = n;` instead of `int i = n; /* i is a nb of tokens */`.
Example 2: `assert(size > 0);` instead of `/* here, size should be positive */`.

By contributing to Zstandard, you agree that your contributions will be licensed under both the LICENSE file and the COPYING file in the root directory of this source tree.