High MySQL connections

The challenge is to find out why the results are what they are. Maybe I'm missing something obvious; sorry this isn't very specific. But if anyone has an area to focus on, that would be very helpful. Cheers.

Load testing

The load test performs roughly 5,486 writes per minute (around 90 per second). When the servers become overwhelmed, I can see the following errors in the logs:

  • (11: Resource temporarily unavailable) while connecting to upstream
  • upstream timed out (110: Connection timed out) while reading response header from upstream
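When triaging, it helps to know how often each of the two failure modes occurs. A quick way to count them is to grep the nginx error log; the sample log below is fabricated so the commands are self-contained (the real path would be something like /var/log/nginx/error.log):

```shell
# Build a tiny sample error log so the counting commands can be demonstrated.
cat > /tmp/nginx-error.sample <<'EOF'
2014/05/01 12:00:01 [error] connect() to unix socket failed (11: Resource temporarily unavailable) while connecting to upstream
2014/05/01 12:00:02 [error] upstream timed out (110: Connection timed out) while reading response header from upstream
2014/05/01 12:00:03 [error] upstream timed out (110: Connection timed out) while reading response header from upstream
EOF

# Count each failure mode. "(11: ...)" usually means the FastCGI socket's
# listen backlog is full; "(110: ...)" means PHP-FPM accepted the request
# but took longer than the proxy read timeout to answer.
grep -c 'Resource temporarily unavailable' /tmp/nginx-error.sample   # -> 1
grep -c 'upstream timed out' /tmp/nginx-error.sample                 # -> 2
```

If the two counts spike at different times, that supports treating them as separate problems.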

Problems

When running the load test, I see the following problems:

  • Pages performing the updates/writes (the ones the load test exercises) become slow, taking 10–20 seconds to load.
  • Nginx returns seemingly random 404s on any page. The results show that at peak, 10–20% of requests can result in a 404.

I think these are two different problems and possibly unrelated. I can't see any flat lines in the graphs, which would indicate a limit being reached.

  • Web servers sit at 60% CPU and stay steady. RAM looks fine.
  • The database server sits at around 20% CPU and stays steady. RAM looks fine.
  • Database connections climb to 1500/2000. This looks iffy, although it isn't a flat line, which suggests no limit is being hit.
  • Network connection limits seem fine.
  • Tables are indexed where possible/appropriate.
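One way to sanity-check the 1500/2000 connection figure against the configuration below: each PHP-FPM child typically holds its own MySQL connection, so the fleet's worst-case connection demand is servers × pm.max_children. A back-of-envelope sketch using the numbers from this post:

```shell
# Worst-case concurrent PHP-FPM children across the fleet, each of which
# may hold one MySQL connection.
servers=6          # c3.2xlarge web servers
max_children=512   # pm.max_children per server (from the pool config below)
echo $((servers * max_children))   # -> 3072
```

3,072 potential workers pushing against a ceiling of roughly 2,000 connections suggests the database connection count, rather than the web tier, is the resource to watch as load climbs.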

Infrastructure

AWS RDS MySQL: 1 x db.m3.xlarge for write operations, 1 x db.m3.xlarge read replica for read operations

AWS EC2 web servers: Linux, Nginx, PHP-FPM, 6 x c3.2xlarge

Configuration

/etc/php-fpm.d/domain.com.conf

[domain.com]

user = nginx
group = nginx
;;;The address on which to accept FastCGI requests
listen = /var/run/php-fpm/domain.com.sock
;;;A value of '-1' means unlimited. Although this may be based on the ulimit hard limit.
;;;May be worth setting as desired in case of the above.
listen.backlog = -1
;;;dynamic - the number of child processes is set dynamically based on the following directives: pm.max_children, pm.start_servers, pm.min_spare_servers, pm.max_spare_servers.
pm = dynamic
;;;The maximum number of child processes to be created when pm is set to dynamic.
pm.max_children = 512
;;;The number of child processes created on startup. Used only when pm is set to dynamic.
pm.start_servers = 8
;;;The desired minimum number of idle server processes. Used only when pm is set to dynamic.
pm.min_spare_servers = 2
;;;The desired maximum number of idle server processes. Used only when pm is set to dynamic.
pm.max_spare_servers = 16
;;;The number of requests each child process should execute before respawning.
pm.max_requests = 500
;;;The URI to view the FPM status page.
pm.status_path = /status/fpm/domain.com
;;;The timeout for serving a single request. This option should be used when the 'max_execution_time' ini option does not stop script execution.
request_terminate_timeout = 30
;;;Set open file descriptor rlimit. Default value: system defined value.
;;;rlimit_files
;;;Set max core size rlimit. Possible values: 'unlimited' or an integer greater or equal to 0. Default value: system defined value.
;;;rlimit_core
php_admin_value[post_max_size] = 8M
php_admin_value[upload_max_filesize] = 8M
php_admin_value[disable_functions] = exec,passthru,system,proc_open,popen,show_source
;;;Site specific custom flags go here
;;;End of site specific flags
slowlog = /var/log/nginx/slow-query-$pool.log
request_slowlog_timeout = 10s
chdir = /
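For what it's worth, pm.max_children is commonly sized against memory rather than picked outright: available RAM divided by the average per-child resident size. A sketch with hypothetical numbers (the 12 GB headroom and 48 MB per child are assumptions for illustration, not measurements from this setup):

```shell
# Hypothetical figures: RAM left over for PHP-FPM on a c3.2xlarge (15 GB
# total, minus nginx/OS overhead) and average RSS of one PHP-FPM child.
avail_mb=12000
avg_child_mb=48
echo $((avail_mb / avg_child_mb))   # -> 250
```

If the real per-child footprint is anywhere near that, 512 children could overcommit memory well before the load test's CPU figures look alarming.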

Nginx – /etc/nginx/nginx.conf

 events {
     worker_connections 19000;
     # essential for linux, optimized to serve many clients with each thread
     use epoll;
     multi_accept on;
 }

 worker_rlimit_nofile 20000;

 http {
     include /etc/nginx/mime.types;
     default_type application/octet-stream;

     log_format proxy_combined '$http_x_real_ip - $remote_user [$time_local] "$request" '
                               '$status $body_bytes_sent "$http_referer" "$http_user_agent"';
     access_log /var/log/nginx/access.log proxy_combined;

     sendfile on;

     ## Start: Size Limits & Buffer Overflows ##
     client_body_buffer_size 1K;
     client_header_buffer_size 1k;
     # client_max_body_size 1k;
     large_client_header_buffers 2 1k;
     ## END: Size Limits & Buffer Overflows ##

     ## Start: Caching file descriptors ##
     open_file_cache max=1000 inactive=20s;
     open_file_cache_valid 30s;
     open_file_cache_min_uses 2;
     open_file_cache_errors on;
     ## END: Caching file descriptors ##

     ## Start: Timeouts ##
     client_body_timeout 10;
     client_header_timeout 10;
     keepalive_timeout 5 5;
     send_timeout 10;
     ## End: Timeouts ##

     server_tokens off;
     tcp_nodelay on;

     gzip on;
     gzip_http_version 1.1;
     gzip_vary on;
     gzip_comp_level 6;
     gzip_proxied any;
     gzip_types text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
     gzip_buffers 16 8k;
     gzip_disable "MSIE [1-6]\.(?!.*SV1)";

     client_max_body_size 30M;

     proxy_cache_path /var/cache/nginx/c2b levels=1:2 keys_zone=c2b-cache:8m max_size=100m inactive=60m;
     proxy_temp_path /var/cache/tmp;
     proxy_ignore_headers Set-Cookie X-Accel-Expires Expires Cache-Control;

     # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
     reset_timedout_connection on;

     include /etc/nginx/conf.d/*.conf;
 }
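One thing relevant to the "(11: Resource temporarily unavailable)" errors: listen.backlog = -1 in the pool config is still capped by the kernel, so the effective backlog on the unix socket can be far smaller than intended (the default on many kernels is only 128). A quick check on a web server (the path is standard Linux; the commented tuning line is a sketch, not a recommendation tested against this setup):

```shell
# Effective kernel-level ceiling for listen() backlogs on this host.
cat /proc/sys/net/core/somaxconn

# To raise it (applies to listening sockets created after the change):
# sysctl -w net.core.somaxconn=1024
```

If this value is small, bursts of requests overflow the FastCGI socket's queue and nginx reports exactly this "Resource temporarily unavailable" error.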

Nginx site-specific – /etc/nginx/conf.d/domain.com

 # pass the PHP scripts to FastCGI server listening on
 location ~ \.php$ {
     fastcgi_pass unix:/var/run/php-fpm/domain.com.sock;
     fastcgi_index index.php;
     fastcgi_param SCRIPT_FILENAME /var/www/domain.com/public_html/$fastcgi_script_name;
     include fastcgi_params;
     fastcgi_read_timeout 30;
 }

I got to the bottom of this in the end. I changed the MySQL database tables from MyISAM to InnoDB (which I believe you can't do if you use full-text search).
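The conversion itself can be scripted: information_schema lists every table still on MyISAM, and a query can emit one ALTER statement per table. A sketch ('mydb' is a placeholder schema name; the generated file would then be run through the mysql client):

```shell
# Write the generator query to a file ('mydb' is a hypothetical schema name).
# Piping this file through the mysql client would print one ALTER statement
# for each remaining MyISAM table.
cat > /tmp/convert-to-innodb.sql <<'EOF'
SELECT CONCAT('ALTER TABLE `', table_name, '` ENGINE=InnoDB;')
FROM information_schema.tables
WHERE table_schema = 'mydb' AND engine = 'MyISAM';
EOF
```

Note that on MySQL versions before 5.6, tables with FULLTEXT indexes cannot be converted to InnoDB this way, which matches the full-text caveat above.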

There's a bit on this here –

MyISAM table locking issues

More information can quickly be found via Google.

This is now resolved. We're now seeing around 60,000 successful connections per minute.