Counting the number of rows in the table (if `*` or a constant is specified as the argument) or the number of non-empty values in a table column (if a column name is specified as the argument).
Like other aggregate functions, it can be combined with GROUP BY to get statistics on the parts of the table that correspond to the values of the columns being grouped. Use the `DISTINCT` modifier to count only distinct values.
```yql
SELECT COUNT(*) FROM my_table;
SELECT key, COUNT(value) FROM my_table GROUP BY key;
SELECT COUNT(DISTINCT value) FROM my_table;
```
Minimum or maximum value.
As an argument, you may use an arbitrary computable expression with a comparable result.
```yql
SELECT MIN(value), MAX(value) FROM my_table;
```
Sum of the numbers.
As an argument, you may use an arbitrary computable expression with a numeric result.
Integers are automatically expanded to 64 bits to reduce the risk of overflow.
```yql
SELECT SUM(value) FROM my_table;
```
Arithmetic average.
As an argument, you may use an arbitrary computable expression with a numeric result.
Integer values and time intervals are automatically converted to Double.
```yql
SELECT AVG(value) FROM my_table;
```
Number of rows for which the expression specified as the argument evaluates to true.
The value `NULL` is equated to `false` (if the argument type is `Bool?`).
The function does not perform implicit type casting to Boolean for strings and numbers.
```yql
SELECT
    COUNT_IF(value % 2 == 1) AS odd_count
FROM my_table;
```
{% note info %}

Unlike other aggregate functions, you can't use the `DISTINCT` modifier to count distinct values among the rows that meet the condition, because the argument contains no values. To get this result, use the built-in `IF` function with two arguments in a subquery (so that the `else` branch yields `NULL`), then apply an outer `COUNT(DISTINCT ...)` to its result.

{% endnote %}
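A minimal sketch of this workaround (the column and table names are illustrative):

```yql
-- Two-argument IF() yields NULL in the else branch,
-- and COUNT(DISTINCT ...) skips NULLs.
SELECT
    COUNT(DISTINCT odd_value) AS distinct_odd_count
FROM (
    SELECT
        IF(value % 2 == 1, value) AS odd_value
    FROM my_table
);
```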
Sum or arithmetic average, but only for the rows that satisfy the condition passed as the second argument.
Therefore, `SUM_IF(value, condition)` is a slightly shorter notation for `SUM(IF(condition, value))`; the same applies to `AVG`. The argument's data type expansion is the same as for the corresponding functions without the suffix.
```yql
SELECT
    SUM_IF(value, value % 2 == 1) AS odd_sum,
    AVG_IF(value, value % 2 == 1) AS odd_avg
FROM my_table;
```
When you use aggregation factories, a `Tuple` containing the value and the predicate is passed as the first `AGGREGATE_BY` argument.
```yql
$sum_if_factory = AggregationFactory("SUM_IF");
$avg_if_factory = AggregationFactory("AVG_IF");

SELECT
    AGGREGATE_BY(AsTuple(value, value % 2 == 1), $sum_if_factory) AS odd_sum,
    AGGREGATE_BY(AsTuple(value, value % 2 == 1), $avg_if_factory) AS odd_avg
FROM my_table;
```
Get the value of the expression specified as the argument from one of the table rows. There is no guarantee which row is used. It's similar to the any() function in ClickHouse.
Because of this lack of guarantee, `SOME` is computationally cheaper than `MIN`/`MAX`, which are often used in similar situations.
```yql
SELECT
    SOME(value)
FROM my_table;
```
{% note alert %}

When the aggregate function `SOME` is called multiple times, it's not guaranteed that all the resulting values are taken from the same row of the source table. To get this guarantee, pack the values into any container and pass that container to `SOME`. For example, for a structure, you can apply `AsStruct`.

{% endnote %}
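A minimal sketch of packing values into a structure (column names are illustrative):

```yql
-- Packing related columns into one structure makes SOME pick
-- both fields from the same source row.
SELECT
    SOME(AsStruct(key AS key, value AS value))
FROM my_table;
```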
Approximating the number of unique values using the HyperLogLog algorithm. Logically, it does the same thing as COUNT(DISTINCT ...), but runs much faster at the cost of some error.
Arguments:
By choosing the accuracy, you can trade additional computing resources and RAM for a lower error.
All three functions are currently aliases, but `CountDistinctEstimate` may start using a different algorithm in the future.
```yql
SELECT
    CountDistinctEstimate(my_column)
FROM my_table;
```

```yql
SELECT
    HyperLogLog(my_column, 4)
FROM my_table;
```
Get all column values as a list. When combined with `DISTINCT`, it returns only distinct values. The optional second parameter sets the maximum number of values to be returned; a zero limit means unlimited.
If you already know that you have few distinct values, use the `AGGREGATE_LIST_DISTINCT` aggregate function to build the same result in memory (which might not be enough for a large number of distinct values).
The order of elements in the result list depends on the implementation and can't be set externally. To return an ordered list, sort the result, for example, with ListSort.
To return multiple values from one row as a list, don't call the `AGGREGATE_LIST` function several times; instead, add all the needed values to a container, for example, via `AsList` or `AsTuple`, and pass that container to a single `AGGREGATE_LIST` call.
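A minimal sketch of the single-call approach (column names are illustrative):

```yql
-- One AGGREGATE_LIST over a tuple instead of several separate calls,
-- so both values of each row stay together in one list element.
SELECT
    AGGREGATE_LIST(AsTuple(key, value))
FROM my_table;
```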
For example, you can combine it with `DISTINCT` and the function `String::JoinFromList` (an equivalent of `','.join(list)` in Python) to output all the values found in the column after `GROUP BY` as a single string.
```yql
SELECT
    AGGREGATE_LIST(region),
    AGGREGATE_LIST(region, 5),
    AGGREGATE_LIST(DISTINCT region),
    AGGREGATE_LIST_DISTINCT(region),
    AGGREGATE_LIST_DISTINCT(region, 5)
FROM users;
```

```yql
-- An equivalent of GROUP_CONCAT in MySQL
SELECT
    String::JoinFromList(CAST(AGGREGATE_LIST(region, 2) AS List<String>), ",")
FROM users;
```
These functions also have a short notation: `AGG_LIST` and `AGG_LIST_DISTINCT`.
{% note alert %}

Execution is not lazy, so when you use these functions, make sure the list has a reasonable size (about a thousand items or less). To stay on the safe side, use the second optional numeric argument to limit the number of items in the list.

{% endnote %}
Return the value of the first argument for the table row where the second argument is at its minimum/maximum.
You can optionally specify a third argument N that affects the behavior if the table has multiple rows with the same minimum or maximum value.
When choosing N, we recommend staying within a few hundred or thousand to avoid issues with the memory limit.
If your task requires absolutely all values and their number is measured in tens of thousands or more, then instead of these aggregate functions it's better to use a `JOIN` of the source table with a subquery that does `GROUP BY + MIN/MAX` on the desired columns of this table.
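A hypothetical sketch of that `JOIN` approach (the `key`/`value` column names are illustrative, and here the per-key maximum is kept):

```yql
-- Keep all rows holding the per-key maximum value,
-- including ties, without any item limit.
SELECT t.*
FROM my_table AS t
JOIN (
    SELECT key, MAX(value) AS max_value
    FROM my_table
    GROUP BY key
) AS m
ON t.key == m.key AND t.value == m.max_value;
```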
{% note warning "Attention" %}

If the second argument is always `NULL`, the aggregation result is `NULL`.

{% endnote %}
When you use aggregation factories, a `Tuple` containing the value and the key is passed as the first `AGGREGATE_BY` argument.
```yql
SELECT
    MIN_BY(value, LENGTH(value)),
    MAX_BY(value, key, 100)
FROM my_table;
```
```yql
$min_by_factory = AggregationFactory("MIN_BY");
$max_by_factory = AggregationFactory("MAX_BY", 100);

SELECT
    AGGREGATE_BY(AsTuple(value, LENGTH(value)), $min_by_factory),
    AGGREGATE_BY(AsTuple(value, key), $max_by_factory)
FROM my_table;
```
Return a list of the maximum/minimum values of an expression. The first argument is an expression, the second argument limits the number of items.
```yql
SELECT
    TOP(key, 3),
    BOTTOM(value, 3)
FROM my_table;
```
```yql
$top_factory = AggregationFactory("TOP", 3);
$bottom_factory = AggregationFactory("BOTTOM", 3);

SELECT
    AGGREGATE_BY(key, $top_factory),
    AGGREGATE_BY(value, $bottom_factory)
FROM my_table;
```
Return a list of values of the first argument for the rows containing the maximum/minimum values of the second argument. The third argument limits the number of items in the list.
When you use aggregation factories, a `Tuple` containing the value and the key is passed as the first `AGGREGATE_BY` argument. In this case, the limit on the number of items is passed as the second argument at factory creation.
```yql
SELECT
    TOP_BY(value, LENGTH(value), 3),
    BOTTOM_BY(value, key, 3)
FROM my_table;
```
```yql
$top_by_factory = AggregationFactory("TOP_BY", 3);
$bottom_by_factory = AggregationFactory("BOTTOM_BY", 3);

SELECT
    AGGREGATE_BY(AsTuple(value, LENGTH(value)), $top_by_factory),
    AGGREGATE_BY(AsTuple(value, key), $bottom_by_factory)
FROM my_table;
```
Getting an approximate list of the most common values in a column with an estimate of their count. Returns a list of structures with two fields:

- `Value`: the frequently occurring value that was found.
- `Frequency`: an estimate of the value's number of occurrences in the table.

Required argument: the value itself.

Optional arguments: for `TOPFREQ`, the desired number of items in the result. `MODE` is an alias for `TOPFREQ` with this argument set to 1; for `TOPFREQ`, this argument also defaults to 1.

```yql
SELECT
    MODE(my_column),
    TOPFREQ(my_column, 5, 1000)
FROM my_table;
```
Standard deviation and variance in a column. These functions use a single-pass parallel algorithm, whose result may differ from that of the more common methods requiring two passes through the data.
By default, the sample variance and standard deviation are calculated. Several naming variants are available:

- With the `POPULATION` suffix/prefix (for example, `VARIANCE_POPULATION`, `POPULATION_VARIANCE`): calculates the variance or standard deviation for the population.
- With the `SAMPLE` suffix/prefix or without a suffix (for example, `VARIANCE_SAMPLE`, `SAMPLE_VARIANCE`, `VARIANCE`): calculates the sample variance and standard deviation.

Several abbreviated aliases are also defined, for example, `VARPOP` or `STDDEVSAMP`.
If all the passed values are `NULL`, the functions return `NULL`.
```yql
SELECT
    STDDEV(numeric_column),
    VARIANCE(numeric_column)
FROM my_table;
```
Correlation and covariance between two columns.
Abbreviated versions are also available: `CORR` and `COVAR`. For covariance, there are also versions with the `SAMPLE`/`POPULATION` suffix, similar to `VARIANCE` above.
Unlike most other aggregate functions, these don't skip `NULL`; instead, they count it as 0.
When you use aggregation factories, a `Tuple` containing the two values is passed as the first `AGGREGATE_BY` argument.
```yql
SELECT
    CORRELATION(numeric_column, another_numeric_column),
    COVARIANCE(numeric_column, another_numeric_column)
FROM my_table;
```
```yql
$corr_factory = AggregationFactory("CORRELATION");

SELECT
    AGGREGATE_BY(AsTuple(numeric_column, another_numeric_column), $corr_factory)
FROM my_table;
```
Calculating percentiles using the amortized version of the TDigest algorithm. `MEDIAN` is an alias for `PERCENTILE(N, 0.5)`.
{% note info "Restriction" %}
The first argument (N) must be a table column name. If you need to bypass this restriction, use a subquery. The restriction is introduced to simplify calculations, since the implementation merges the calls with the same first argument (N) into a single pass.
{% endnote %}
```yql
SELECT
    MEDIAN(numeric_column),
    PERCENTILE(numeric_column, 0.99)
FROM my_table;
```
Plotting an approximate histogram based on a numeric expression with automatic selection of buckets.
You can limit the number of buckets with an optional argument; the default value is 100. Keep in mind that extra accuracy costs additional computing resources and may negatively affect the query execution time or, in extreme cases, cause the query to fail.
You can specify a "weight" for each value used in the histogram. To do this, pass the aggregate function a second argument with an expression for calculating the weight. By default, the weight 1.0 is used. If you use non-standard weights, you can also use a third argument to limit the number of buckets.
If you pass two arguments, the meaning of the second argument is determined by its type: an integer literal limits the number of buckets; otherwise, it's used as a weight.
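A minimal sketch of the weighted form described above (the `weight_column` name is illustrative):

```yql
-- Second argument is a weight expression (not an integer literal),
-- so the third argument is needed to limit the bucket count.
SELECT
    HISTOGRAM(numeric_column, weight_column, 10)
FROM my_table;
```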
Various modifications of the algorithm are available:
AdaptiveDistanceHistogram
AdaptiveWeightHistogram
AdaptiveWardHistogram
BlockWeightHistogram
BlockWardHistogram
By default, `HISTOGRAM` is a synonym for `AdaptiveWardHistogram`. Both functions are equivalent and interchangeable in all contexts.
The Distance, Weight, and Ward algorithms differ in the formulas that combine two points into one:

```cpp
TWeightedValue CalcDistanceQuality(const TWeightedValue& left, const TWeightedValue& right) {
    return TWeightedValue(right.first - left.first, left.first);
}

TWeightedValue CalcWeightQuality(const TWeightedValue& left, const TWeightedValue& right) {
    return TWeightedValue(right.second + left.second, left.first);
}

TWeightedValue CalcWardQuality(const TWeightedValue& left, const TWeightedValue& right) {
    const double N1 = left.second;
    const double N2 = right.second;
    const double mu1 = left.first;
    const double mu2 = right.first;
    return TWeightedValue(N1 * N2 / (N1 + N2) * (mu1 - mu2) * (mu1 - mu2), left.first);
}
```
Difference between Adaptive and Block:
{% block info %}

Contrary to adaptive histograms, a block histogram doesn't rebuild its bins after each point is added. Instead, it accumulates points, and when the number of points overflows the specified limits, it shrinks all the points at once to produce a histogram. There are two limits and two corresponding shrinkage operations:

{% endblock %}
When you use aggregation factories, a `Tuple` containing the value and the weight is passed as the first `AGGREGATE_BY` argument.
```yql
SELECT
    HISTOGRAM(numeric_column)
FROM my_table;
```

```yql
SELECT
    Histogram::Print(
        HISTOGRAM(numeric_column, 10),
        50
    )
FROM my_table;
```

```yql
$hist_factory = AggregationFactory("HISTOGRAM");

SELECT
    AGGREGATE_BY(AsTuple(numeric_column, 1.0), $hist_factory)
FROM my_table;
```
Plotting a histogram based on an explicitly specified fixed bucket scale.
Arguments: the distance between buckets for `LinearHistogram` or the logarithm base for `LogarithmicHistogram`/`LogHistogram` (these are aliases); in both cases, the default value is 10.
The result format is exactly the same as for adaptive histograms, so you can use the same set of auxiliary functions.
If the spread of input values is uncontrollably large, we recommend specifying the minimum and maximum values to prevent potential failures due to high memory consumption.
```yql
SELECT
    LogarithmicHistogram(numeric_column, 2)
FROM my_table;
```
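A hedged sketch of bounding the histogram range as recommended above; the argument order (bucket width, then minimum, then maximum) and the bounds themselves are assumptions for illustration:

```yql
-- Fixed 10-unit buckets, clamped to the assumed range [0, 1000]
-- to keep memory consumption bounded.
SELECT
    LinearHistogram(numeric_column, 10, 0, 1000)
FROM my_table;
```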
```yql
BOOL_AND(Bool?) -> Bool?
BOOL_OR(Bool?) -> Bool?
BOOL_XOR(Bool?) -> Bool?
```
Apply the relevant logical operation (`AND`/`OR`/`XOR`) to all values in a Boolean column or expression.
Unlike most other aggregate functions, these functions don't skip `NULL` during aggregation and follow these rules:

- `true AND null == null`
- `false OR null == null`

For `BOOL_AND`:

- If at least one `NULL` value is present, the result is `NULL` regardless of any `true` values in the expression.
- If at least one `false` value is present, the result changes to `false` regardless of any `NULL` values in the expression.

For `BOOL_OR`:

- If at least one `NULL` value is present, the result changes to `NULL` regardless of any `false` values in the expression.
- If at least one `true` value is present, the result changes to `true` regardless of any `NULL` values in the expression.

For `BOOL_XOR`:

- The result is `NULL` if any `NULL` value is found.

Examples of this behavior can be found below.
To skip `NULL` values during aggregation, use the `MIN`/`MAX` or `BIT_AND`/`BIT_OR`/`BIT_XOR` functions.
```yql
$data = [
    <|nonNull: true, nonFalse: true, nonTrue: NULL, anyVal: true|>,
    <|nonNull: false, nonFalse: NULL, nonTrue: NULL, anyVal: NULL|>,
    <|nonNull: false, nonFalse: NULL, nonTrue: false, anyVal: false|>,
];

SELECT
    BOOL_AND(nonNull) AS nonNullAnd,    -- false
    BOOL_AND(nonFalse) AS nonFalseAnd,  -- NULL
    BOOL_AND(nonTrue) AS nonTrueAnd,    -- false
    BOOL_AND(anyVal) AS anyAnd,         -- false
    BOOL_OR(nonNull) AS nonNullOr,      -- true
    BOOL_OR(nonFalse) AS nonFalseOr,    -- true
    BOOL_OR(nonTrue) AS nonTrueOr,      -- NULL
    BOOL_OR(anyVal) AS anyOr,           -- true
    BOOL_XOR(nonNull) AS nonNullXor,    -- true
    BOOL_XOR(nonFalse) AS nonFalseXor,  -- NULL
    BOOL_XOR(nonTrue) AS nonTrueXor,    -- NULL
    BOOL_XOR(anyVal) AS anyXor          -- NULL
FROM AS_TABLE($data);
```
Apply the relevant bitwise operation to all values of a numeric column or expression.
```yql
SELECT
    BIT_XOR(unsigned_numeric_value)
FROM my_table;
```
No arguments. Allowed only if there is a `SessionWindow` in `GROUP BY`/`PARTITION BY`.
Returns the value of the `SessionWindow` key column. If `SessionWindow` has two arguments, it returns the minimum value of the first argument within the group/partition.
In the case of the expanded version of `SessionWindow`, it returns the value of the second element of the tuple returned by `<calculate_lambda>` for which the first tuple element is `True`.
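A hypothetical sketch of the two-argument case; the table, the `user_id`/`ts` columns, and the 600-second gap are illustrative assumptions:

```yql
-- Rows of one user separated by less than 600 seconds fall into
-- one session; SessionStart() returns the session's key value.
SELECT
    user_id,
    SessionStart() AS session_start,
    COUNT(*) AS events
FROM my_table
GROUP BY user_id, SessionWindow(ts, 600);
```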
Applying an aggregation factory to all values of a column or expression. The `MULTI_AGGREGATE_BY` function requires that the value of the column or expression be a structure, tuple, or list, and applies the factory to each individual element, placing the result in a container of the same format. If different values of the column or expression contain lists of different lengths, the resulting list will have the smallest of the source lengths.
You can specify `DISTINCT` before the column or expression to aggregate only distinct values.

```yql
$count_factory = AggregationFactory("COUNT");

SELECT
    AGGREGATE_BY(DISTINCT column, $count_factory) AS uniq_count
FROM my_table;
```
```yql
SELECT
    MULTI_AGGREGATE_BY(nums, AggregationFactory("count")) AS count,
    MULTI_AGGREGATE_BY(nums, AggregationFactory("min")) AS min,
    MULTI_AGGREGATE_BY(nums, AggregationFactory("max")) AS max,
    MULTI_AGGREGATE_BY(nums, AggregationFactory("avg")) AS avg,
    MULTI_AGGREGATE_BY(nums, AggregationFactory("percentile", 0.9)) AS p90
FROM my_table;
```