Throttles and Rate Limiting
===========================

With the way Sentry works you may find yourself in a situation where
you'll see too much inbound traffic without a good way to drop excess
messages. There are a few solutions to this, and you'll likely want to
employ them all if you are faced with this problem.

Event Quotas
------------

One of the primary mechanisms for throttling workloads in Sentry involves
setting up event quotas. These can be configured per project as well as
system-wide, and allow you to limit the maximum number of events accepted
within a 60-second period.

Configuration
`````````````

The primary implementation uses Redis, and simply requires you to configure
the connection information:

.. code-block:: python

    SENTRY_QUOTAS = 'sentry.quotas.redis.RedisQuota'

    SENTRY_QUOTA_OPTIONS = {
        'hosts': {
            0: {
                'host': 'localhost',
                'port': 6379
            }
        }
    }

You also have the ability to specify multiple nodes and have keys
automatically distributed. It's unlikely that you'll need this
functionality, but if you do, a simple configuration might look like this:

.. code-block:: python

    SENTRY_QUOTA_OPTIONS = {
        'hosts': {
            0: {
                'host': '192.168.1.1'
            },
            1: {
                'host': '192.168.1.2'
            }
        },
    }

You can also configure system-wide maximums and a default value for all
projects:

.. code-block:: python

    SENTRY_DEFAULT_MAX_EVENTS_PER_MINUTE = '90%'

    SENTRY_SYSTEM_MAX_EVENTS_PER_MINUTE = 500

If you have additional needs, you're free to extend the base ``Quota``
class just as the Redis implementation does.
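
For example, a custom backend would subclass the base class, implement its
limiting hook, and then be wired up by pointing ``SENTRY_QUOTAS`` at its
dotted path, just like the Redis example above. The sketch below is
illustrative only: the ``is_rate_limited`` hook, its boolean return value,
and the constructor signature are assumptions for this example, so mirror
whatever contract the Redis implementation actually follows.

.. code-block:: python

    # Illustrative sketch of a custom quota backend. The is_rate_limited()
    # hook and its return value are assumptions, not documented API; check
    # the Redis implementation for the real contract.
    import time
    from collections import defaultdict

    from sentry.quotas.base import Quota


    class InMemoryQuota(Quota):
        """Track per-project event counts in local process memory."""

        def __init__(self, max_events=100, **options):
            super(InMemoryQuota, self).__init__(**options)
            self.max_events = max_events
            self.counts = defaultdict(int)

        def is_rate_limited(self, project):
            # Bucket events into the current 60-second window per project.
            window = int(time.time() // 60)
            key = (project.id, window)
            self.counts[key] += 1
            return self.counts[key] > self.max_events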

Notification Rate Limits
------------------------

In some cases you may need to limit things such as outbound email
notifications. To address this, Sentry provides a rate limits subsystem
which supports arbitrary rate limits.

Configuration
`````````````

Like event quotas, the primary implementation uses Redis:

.. code-block:: python

    SENTRY_RATELIMITER = 'sentry.ratelimits.redis.RedisRateLimiter'

    SENTRY_RATELIMITER_OPTIONS = {
        'hosts': {
            0: {
                'host': 'localhost',
                'port': 6379
            }
        }
    }
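
To make "arbitrary rate limits" concrete, a backend of this kind is
typically asked whether a given key has gone over a limit within the current
window. The call below is a hypothetical sketch only: the ``sentry.app``
import path and the ``is_limited`` name and signature are assumptions for
illustration, not documented API.

.. code-block:: python

    # Hypothetical usage sketch -- the module path and the is_limited()
    # signature are assumptions, not documented API.
    from sentry.app import ratelimiter


    def maybe_notify(user_id):
        # Throttle to at most 10 notifications per window for this user;
        # the key is an arbitrary string, which is the point of the subsystem.
        key = 'mail:user:%d' % user_id
        if ratelimiter.is_limited(key, limit=10):
            return  # over the limit, silently drop the notification
        print('sending notification to user %d' % user_id)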

Rate Limiting with IPTables
---------------------------

One of your most effective options is to rate limit with your system's
firewall, in our case, IPTables. If you're not sure how IPTables works,
take a look at `Ubuntu's IPTables How-to
<https://help.ubuntu.com/community/IptablesHowTo>`_.

A sample configuration, which will limit a single IP from bursting more
than 5 messages in a 10-second period, might look like this::

    # create a new chain for rate limiting
    -N LIMITED

    # rate limit individual ips to prevent stupidity
    -I INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
    -I INPUT -p tcp --dport 443 -m state --state NEW -m recent --set
    -I INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 10 --hitcount 5 -j LIMITED
    -I INPUT -p tcp --dport 443 -m state --state NEW -m recent --update --seconds 10 --hitcount 5 -j LIMITED

    # log rejected ips
    -A LIMITED -p tcp -m limit --limit 5/min -j LOG --log-prefix "Rejected TCP: " --log-level 7
    -A LIMITED -j REJECT

Rate Limiting with Nginx
------------------------

While IPTables will help prevent DDoS, it doesn't effectively communicate
to the client that it's being rate limited. This can be important
depending on how the client chooses to respond to the situation.

An alternative (or rather, an addition) is to use something like
`ngx_http_limit_req_module
<http://nginx.org/en/docs/http/ngx_http_limit_req_module.html>`_, which is
what the configuration below uses.

An example configuration looks something like this::

    limit_req_zone $binary_remote_addr zone=one:100m rate=3r/s;
    limit_req_zone $projectid zone=two:100m rate=6r/s;

    limit_req_status 429;
    limit_req_log_level warn;

    server {
      listen 80;

      location / {
        proxy_pass http://internal;
      }

      location ~* /api/(?P<projectid>\d+/)?store/ {
        proxy_pass http://internal;

        limit_req zone=one burst=3 nodelay;
        limit_req zone=two burst=10 nodelay;
      }
    }
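
The value of returning ``429`` is that a well-behaved client can back off
instead of retrying immediately. As a rough illustration (this is not Sentry
or Raven client code, and honouring ``Retry-After`` assumes the server sends
that header), a sender might do something like this:

.. code-block:: python

    # Rough illustration of a client backing off on HTTP 429. Not Sentry
    # client code; the Retry-After header is only used if the server
    # happens to send one.
    import time

    import requests


    def send_event(url, payload):
        response = requests.post(url, data=payload)
        if response.status_code == 429:
            # Rate limited: wait before retrying rather than hammering nginx.
            delay = int(response.headers.get('Retry-After', 10))
            time.sleep(delay)
            response = requests.post(url, data=payload)
        return response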

Using Cyclops (Client Proxy)
----------------------------

An additional option for rate limiting is to do it on the client side.
`Cyclops <https://github.com/heynemann/cyclops>`_ is a third-party proxy
written in Python (using Tornado) which aims to solve this.

It's not officially supported; however, it is used in production by several
large users.
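
Because Cyclops sits between your application and Sentry, wiring it in is
typically just a matter of pointing the client's DSN at the proxy rather
than at the Sentry server itself. The host, port, and keys below are
placeholders, and the exact DSN layout Cyclops expects is an assumption to
verify against its README:

.. code-block:: python

    # Point the Raven client at the Cyclops proxy instead of directly at
    # Sentry. The host/port and keys are placeholders; consult the Cyclops
    # README for the exact wiring it expects.
    from raven import Client

    client = Client('http://public_key:secret_key@cyclops.example.com:9999/1')
    client.captureMessage('hello through the proxy')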