Solaris Web Stack Optimizations
I have finally nailed down all our issues surrounding Varnish on Solaris, thanks to help from sky in #varnish. Apparently Varnish uses a wrapper around connect() to drop stale connections and avoid thread pileups if the back end ever dies. Setting connect_timeout to 0 forces Varnish to call connect() directly. This should eliminate all the 503 back-end errors under Solaris that I mentioned in an earlier blog post.
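For reference, the same parameter can also be flipped on a running varnishd through the management interface (a sketch, assuming the -T 0.0.0.0:8086 admin address from the startup script below):

```shell
# Sketch: change connect_timeout at runtime via the admin port
# (assumes varnishd was started with -T 0.0.0.0:8086).
varnishadm -T 0.0.0.0:8086 param.set connect_timeout 0
```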
Here is our startup script for varnishd that works for our needs. Varnish is built as a 64-bit binary, hence the "-m64" in the cc_command parameter.
#!/bin/sh
rm /sessions/varnish_cache.bin
newtask -p highfile /opt/extra/sbin/varnishd \
  -f /opt/extra/etc/varnish/default.vcl \
  -a 72.11.142.91:80 \
  -p listen_depth=8192 \
  -p thread_pool_max=2000 \
  -p thread_pool_min=12 \
  -p thread_pools=4 \
  -p cc_command='cc -Kpic -G -m64 -o %o %s' \
  -s file,/sessions/varnish_cache.bin,4G \
  -p sess_timeout=10s \
  -p max_restarts=12 \
  -p session_linger=50s \
  -p connect_timeout=0s \
  -p obj_workspace=16384 \
  -p sess_workspace=32768 \
  -T 0.0.0.0:8086 \
  -u webservd -F
I noticed Varnish had a particular problem of keeping connections in the CLOSE_WAIT state long enough to cause issues. I did some tuning on Solaris's TCP stack so it is more aggressive about closing sockets once the work is done.
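A quick way to see whether this is biting you (a diagnostic sketch; it assumes the usual netstat -an output format where the socket state is the last column):

```shell
# Count sockets currently stuck in CLOSE_WAIT.
netstat -an 2>/dev/null | awk '$NF == "CLOSE_WAIT"' | wc -l
```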
Here are my aggressive TCP settings to force Solaris to close off connections in a short duration of time, to avoid file descriptor leaks. You can merge the following TCP tweaks with the settings I have posted earlier to handle more clients.
# 67.5 seconds - default 675 seconds
/usr/sbin/ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
# 30 seconds, aggressively close connections - default 4 minutes on solaris < 8
/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 30000
# 1 minute, poll for dead connection - default 2 hours
/usr/sbin/ndd -set /dev/tcp tcp_keepalive_interval 60000
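To confirm the new values took effect, the same ndd interface can read them back (Solaris-only sketch; values are reported in milliseconds):

```shell
# Read the current values back from the TCP driver.
/usr/sbin/ndd -get /dev/tcp tcp_fin_wait_2_flush_interval
/usr/sbin/ndd -get /dev/tcp tcp_time_wait_interval
/usr/sbin/ndd -get /dev/tcp tcp_keepalive_interval
```

Note that ndd changes do not survive a reboot, so these belong in a startup script if you want them to stick.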
Last but not least, I have finally swapped out ActiveMQ for the FUSE message broker, an "enterprise" ActiveMQ distribution. Hopefully it won't crash once a week like ActiveMQ does for us. The FUSE message broker is based on the ActiveMQ 5.3 sources, which fix various memory leaks found in the current stable release, ActiveMQ 5.2, as of this writing.
If the FUSE message broker does not work out, I might have to give Kestrel a try. Hey, if it worked for Twitter, it should work for us...right?
hi there, thanks for the post. You didn’t mention what version of varnish you finally got running and if/how you compiled it.
I’ve given varnish (2.0.1 – 2.0.4) several tries on Solaris10 and OpenSolaris and it still doesn’t work as expected. I’ve seen crashes, 503s from backend and other issues and was able to work around those, but the “last” thing I’m stuck with is intermittent latency issues. One out of every 10 or so requests takes “forever” (5-15sec) to fulfill even when it is a cache hit. Have you dealt with that issue?
thanks,
Igor
Igor,
Yes, I have dealt with the issues you describe. For the 503s, switching Varnish to connect_timeout 0 solves the problem. As for requests taking forever, there is only one workaround I know of: place nginx in front and fail over to the backend when it happens.
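For the curious, the nginx-in-front idea looks roughly like this (a sketch, not our exact config; the ports and timeout values here are hypothetical):

```nginx
# nginx tries varnish first; on error, timeout, or a 503 it retries
# the request against the app server marked "backup".
upstream cache_tier {
    server 127.0.0.1:80;          # varnish (hypothetical port)
    server 127.0.0.1:8080 backup; # app server, used only on failure
}

server {
    listen 8000;
    location / {
        proxy_pass http://cache_tier;
        proxy_connect_timeout 2s;
        proxy_read_timeout 5s;
        proxy_next_upstream error timeout http_503;
    }
}
```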
thanks for the answer. the solution is not quite what I wanted to hear, but at least I know that I’m not alone with this issue.
Well, at least there is Squid. Squid 2.7 runs *perfectly* on Solaris. Nginx + Squid == Varnish
I use the combo to get the same results that I do with varnish.
I’m afraid that I can’t achieve the same caching strategy with Squid as I can with a nifty VCL for Varnish.
I experienced several other weird issues with varnish on solaris that just don’t let me trust it. I’m considering using opensolaris + xen + linux in DomU + varnish or abandoning varnish altogether.
I had lots of problems with 503s as well, more on the backend side, with things like ffmpeg encoding keeping requests waiting. A temporary solution that seems to work was this one:
.first_byte_timeout = 3000s;
The thing with the 503 is that Varnish waits for the first byte to arrive, and when nothing happens it is usually because PHP is still working and won't return anything until it finishes. So Varnish thinks the server is down. Making it wait longer for the first byte worked quite well for us.
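In context, that timeout lives on the backend declaration in the VCL (a sketch; the host and port are placeholders, and 3000s is the value quoted above - most sites would want something far smaller):

```vcl
backend default {
    .host = "127.0.0.1";          # placeholder
    .port = "8080";               # placeholder
    .first_byte_timeout = 3000s;  # how long to wait for the backend
                                  # to start responding before a 503
}
```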
@Rodrigo
You're looking at the problem wrong: ffmpeg encoding should not tie up a request; it should be an asynchronous process. Consider rewriting your solution to run the ffmpeg encoding in the background. The 503 errors I experience have to do with the TCP "tricks" Varnish does that are not Solaris-friendly.
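The pattern I mean is simply to detach the slow step from the request (a generic shell sketch; "sleep 2" is a hypothetical stand-in for the ffmpeg run):

```shell
# Background the slow job and return to the client immediately.
( sleep 2; echo "encode done" >> /tmp/encode.log ) &
echo "request returned immediately, worker pid $!"
```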
I totally agree. I have the FFmpeg encoding in the background, but I have to extract the images out of the video, and I do that without backgrounding because of some internal issues; then I send it to the queue.
But that was just an example; it could be many other things, like a mailing list or some big query. It can apply to anything.