Performance Tuning
==================

This document describes a set of best practices which may help you squeeze more performance out of various Sentry configurations.

Redis
-----

**Ensure you're using at least Redis 2.4**

All Redis usage in Sentry is temporal, which means Redis's append-log and fsync persistence models are unnecessary.

With that in mind, we recommend the following changes to (some of) the default configuration, collected into a sample ``redis.conf`` after the list:

- Disable saving by removing all ``save XXXX`` lines.
- Set ``maxclients 0`` to remove connection limitations.
- Set ``maxmemory-policy allkeys-lru`` to aggressively prune all keys.
- Set ``maxmemory`` to a reasonable allowance, e.g. ``maxmemory 1gb``.
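
Taken together, those changes amount to something along the lines of the following ``redis.conf`` fragment (a sketch: the commented-out ``save`` lines shown are Redis's stock defaults, and ``maxmemory`` should be sized to your host):

::

    # Persistence is unnecessary for Sentry's temporal data, so disable
    # snapshotting by removing (or commenting out) every "save" line.
    # save 900 1
    # save 300 10
    # save 60 10000

    # Remove the connection limit.
    maxclients 0

    # Cap memory at a reasonable allowance, then aggressively evict the
    # least recently used keys once the cap is reached.
    maxmemory 1gb
    maxmemory-policy allkeys-lru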

Web Server
----------

Switching from the default Sentry worker model to uWSGI in emperor mode can yield very good results.

If you're using supervisord, you can easily implement emperor mode and uWSGI yourself by doing something along the lines of:

::

    [program:web]
    ; each process binds 127.0.0.1:9000-9019 ("90" plus the zero-padded
    ; process number), matching the Nginx upstream below
    command=newrelic-admin run-program /srv/www/getsentry.com/env/bin/uwsgi -s 127.0.0.1:90%(process_num)02d --log-x-forwarded-for --buffer-size 32768 --post-buffering 65536 --need-app --disable-logging --wsgi-file getsentry/wsgi.py --processes 1 --threads 6
    process_name=%(program_name)s_%(process_num)02d
    numprocs=20
    numprocs_start=0
    startsecs=5
    startretries=3
    stopsignal=QUIT
    stopwaitsecs=10
    stopasgroup=true
    killasgroup=true
    environment=SENTRY_CONF="/srv/www/getsentry.com/current/getsentry/settings.py"
    directory=/srv/www/getsentry.com/current/

Once you're running multiple processes, you'll of course also need to configure something like Nginx to load balance across them:

::

    upstream internal {
        least_conn;
        server 127.0.0.1:9000;
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
        server 127.0.0.1:9003;
        server 127.0.0.1:9004;
        server 127.0.0.1:9005;
        server 127.0.0.1:9006;
        server 127.0.0.1:9007;
        server 127.0.0.1:9008;
        server 127.0.0.1:9009;
        server 127.0.0.1:9010;
        server 127.0.0.1:9011;
        server 127.0.0.1:9012;
        server 127.0.0.1:9013;
        server 127.0.0.1:9014;
        server 127.0.0.1:9015;
        server 127.0.0.1:9016;
        server 127.0.0.1:9017;
        server 127.0.0.1:9018;
        server 127.0.0.1:9019;
    }

    server {
        listen 80;
        server_name sentry.example.com;

        # keepalive + raven.js is a disaster
        keepalive_timeout 0;

        # use very aggressive timeouts
        proxy_read_timeout 5s;
        proxy_send_timeout 5s;
        send_timeout 5s;
        resolver_timeout 5s;
        client_body_timeout 5s;

        # buffer larger messages
        client_max_body_size 150k;
        client_body_buffer_size 150k;

        location / {
            uwsgi_pass internal;
            uwsgi_param Host $host;
            uwsgi_param X-Real-IP $remote_addr;
            uwsgi_param X-Forwarded-For $proxy_add_x_forwarded_for;
            uwsgi_param X-Forwarded-Proto $http_x_forwarded_proto;
            include uwsgi_params;
        }
    }

See uWSGI's official documentation for emperor mode details.
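
If you'd rather run emperor mode natively instead of emulating it with supervisord, a vassal configuration roughly equivalent to the command line above might look like the sketch below (the ``/etc/uwsgi/vassals`` path and ``sentry.ini`` filename are illustrative, not prescribed; the options simply translate the ``--`` flags used earlier into ini form):

::

    ; /etc/uwsgi/vassals/sentry.ini
    [uwsgi]
    socket = 127.0.0.1:9000
    wsgi-file = getsentry/wsgi.py
    chdir = /srv/www/getsentry.com/current/
    env = SENTRY_CONF=/srv/www/getsentry.com/current/getsentry/settings.py
    ; one master with many workers replaces the twenty single-process
    ; supervisord programs above
    processes = 20
    threads = 6
    need-app = true
    disable-logging = true
    buffer-size = 32768
    post-buffering = 65536
    log-x-forwarded-for = true

Starting ``uwsgi --emperor /etc/uwsgi/vassals`` then spawns one uWSGI master per ini file in that directory and respawns any vassal that dies. Note that with a single shared socket you'd point Nginx at ``127.0.0.1:9000`` alone rather than the twenty-port upstream above.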

Celery
------

Celery can be difficult to tune. Your goal is to maximize CPU usage without running out of memory. If you have JavaScript clients this becomes more difficult, as currently the sourcemap and context scraping can buffer large amounts of memory, depending on your configuration and the size of your source files.

On a completely anecdotal note, you can take the same approach you might take to improve the webserver: spam more processes. We again look to supervisord to manage this for us:

::

    [program:celeryd]
    ; 16 supervised workers, each running a pool of 6 child processes (-c 6)
    command=/srv/www/getsentry.com/env/bin/sentry celery worker -c 6 -P processes -l WARNING -n worker-%(process_num)02d.worker-3
    process_name=%(program_name)s_%(process_num)02d
    numprocs=16
    numprocs_start=0
    startsecs=1
    startretries=3
    stopsignal=TERM
    stopwaitsecs=10
    stopasgroup=false
    killasgroup=true
    environment=SENTRY_CONF="/srv/www/getsentry.com/current/getsentry/settings.py"
    directory=/srv/www/getsentry.com/current/
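
As a sanity check on sizing: the configuration above yields 16 supervised workers with 6 child processes each, i.e. roughly 96 concurrent task slots, so scale ``numprocs`` and ``-c`` against your core count and available memory.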

Monitoring Memory
-----------------

There are cases where Sentry currently buffers large amounts of memory. This may depend on the client (JavaScript vs. Python) as well as the size of your events. If you repeatedly run into issues where workers or web nodes are using a lot of memory, you'll want to ensure you have some mechanism in place for monitoring and resolving this.

If you're using supervisord, we recommend taking a look at `superlance <http://superlance.readthedocs.org>`_, which aids in this situation:

::

    [eventlistener:memmon]
    command=memmon -a 400MB -m ops@example.com
    events=TICK_60
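
With this in place, memmon wakes on each ``TICK_60`` event (once a minute), restarts any supervised process whose memory use exceeds 400MB, and emails ops@example.com whenever it does so.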