FasterCGI with HHVM

Posted on December 17, 2013

Today, we are happy to announce FastCGI support for HHVM. FastCGI is a popular protocol for communication between an application server (e.g. one running your PHP code) and a web server. With FastCGI support, you can run HHVM behind any popular web server (Apache, Nginx, Lighttpd, etc.). The web server handles all the intricate details of the HTTP protocol, while HHVM does what it does best: running PHP code blazingly fast.

If you can’t wait to get your hands on the new feature, just jump to the Installation section. If you are curious about how the new feature was baked into HHVM or just want to learn how well it performs, read on.

How it works

FastCGI was designed to solve the one crucial problem that plagued its predecessor, CGI: performance. The CGI protocol required that a new instance of the application be spawned for every request. That trade-off may have been tolerable for small native programs whose start-up overhead was negligible, but it is prohibitive for a JIT compiler such as HHVM. The HipHop virtual machine keeps track of the code it has run in the past and reuses the bytecode whenever it sees a known file. HHVM also performs just-in-time compilation, i.e. it translates snippets of bytecode into native machine code when they run frequently. Both of these features require an instance of HHVM that runs in server mode and lives on from request to request, and the FastCGI protocol enables exactly that. Moreover, FastCGI can be configured to reuse the same connection for serving multiple requests, both one after another and in parallel; in the latter case, data from multiple requests is multiplexed on a single connection.
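
Multiplexing is possible because every FastCGI record begins with a fixed 8-byte header that names the request it belongs to. As an illustrative sketch (Python, not HHVM code), decoding that header per the FastCGI 1.0 specification looks like this:

```python
import struct

# FastCGI record header layout (8 bytes, per the FastCGI 1.0 spec):
# version, type, requestId (2 bytes), contentLength (2 bytes),
# paddingLength, reserved -- all big-endian.
FCGI_HEADER = struct.Struct(">BBHHBB")

def parse_record_header(data):
    """Decode one record header. The requestId field is what lets a
    single connection carry several interleaved requests."""
    version, rtype, request_id, content_len, padding, _ = FCGI_HEADER.unpack(data[:8])
    return {"version": version, "type": rtype, "request_id": request_id,
            "content_length": content_len, "padding": padding}

# Example: an FCGI_STDIN record (type 5) for request 1 with a 4-byte body.
hdr = FCGI_HEADER.pack(1, 5, 1, 4, 0, 0)
print(parse_record_header(hdr))
```

A server that multiplexes simply dispatches each record to the in-flight request named by its request id.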

The HHVM FastCGI server uses asynchronous I/O, which means an I/O thread never blocks waiting on a single connection. Instead, system routines such as select() monitor activity on multiple connections at once. When activity is detected (e.g. a file descriptor becomes ready for reading or writing), an I/O thread executes the appropriate non-blocking action, completing a single cycle of the event loop. This approach maximizes CPU use while serving I/O: threads spend only as much time blocked as absolutely necessary. I/O is served by multiple threads; under heavy load there will be roughly one I/O thread per CPU core. Incoming connections are distributed evenly between the I/O threads in round-robin fashion, so operations on a single connection are always single-threaded and no additional synchronization is needed.
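
HHVM's server is written in C++; the Python sketch below only demonstrates the pattern (the handler names are hypothetical): register non-blocking sockets with a selector, wait for readiness, and run the matching callback, one event-loop cycle at a time.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # A new connection is ready; register it for read events.
    conn, _ = server_sock.accept()
    conn.setblocking(False)  # never let an I/O thread block on it
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    # The descriptor is ready, so recv() will not block.
    data = conn.recv(4096)
    if data:
        conn.sendall(data)   # stand-in: HHVM would hand off to a worker
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

# One cycle of the event loop: wait for activity, run the callback.
for key, _ in sel.select(timeout=0):
    key.data(key.fileobj)
```

With one such loop per I/O thread and connections assigned round-robin, each connection is only ever touched by a single thread.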

A separate set of worker threads executes the PHP code. All the (partially) decoded requests are put into a common concurrent queue, and each worker thread pops the next request to serve in a loop. At that point all the request headers are ready, and for small requests the body will typically be available as well. When a request is large (e.g. a file upload), the worker thread might block waiting for more I/O; the payload can be spilled to a file to keep memory usage low. Once all the request data is available, PHP execution begins. The output produced by the PHP script is buffered and handed back to an I/O thread for delivery.
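
The shape of that worker pool can be sketched in a few lines of Python (the request names and the "execute" step are hypothetical stand-ins for running PHP):

```python
import queue
import threading

requests = queue.Queue()  # the common concurrent queue of decoded requests
results = []

def worker():
    while True:
        req = requests.get()       # blocks until a request is ready
        if req is None:            # sentinel: shut this worker down
            break
        results.append(f"ran {req}")  # stand-in for executing PHP
        requests.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for name in ("index.php", "feed.php"):
    requests.put(name)
requests.join()                    # wait until both requests are served

for t in threads:
    requests.put(None)             # one sentinel per worker
for t in threads:
    t.join()

print(sorted(results))             # ['ran feed.php', 'ran index.php']
```

The queue is the only shared state, so the workers need no further coordination with the I/O threads.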

Performance

We conducted benchmarks using Nginx. The first test ran a WordPress instance; the second showcases HHVM's performance on computation-intensive tasks, using a simple PHP script that computes Fibonacci numbers.

WordPress

The first test used the WordPress example page. No changes were made to the WordPress deployment besides filling in the mandatory database connection fields. Nginx served as the web server, and ApacheBench generated the load with this command:

ab -c 50 -n 1000 http://localhost/wordpress/wordpress/?p=1

PHP-FPM

Sadly, the results were pretty bad for PHP. Only 23 requests per second seems to indicate that WordPress is computationally very expensive to run in an interpreter.

Requests per second: 23.17 [#/sec] (mean)
Time per request:    2157.579 [ms] (mean)
Time per request:    43.152 [ms] (mean, across all concurrent requests)
Transfer rate:       275.42 [Kbytes/sec] received

HHVM FastCGI

On a cold start, HHVM still performed much better than PHP-FPM; however, there was a clear penalty associated with the initial JITing of PHP files:

Requests per second: 184.71 [#/sec] (mean)
Time per request:    270.689 [ms] (mean)
Time per request:    5.414 [ms] (mean, across all concurrent requests)
Transfer rate:       2194.38 [Kbytes/sec] received

After the initial warm-up the results got noticeably better:

Requests per second: 949.21 [#/sec] (mean)
Time per request:    52.676 [ms] (mean)
Time per request:    1.054 [ms] (mean, across all concurrent requests)
Transfer rate:       11276.46 [Kbytes/sec] received

That’s a surprisingly good result: roughly 40x the throughput of the default PHP-FPM setup.

Fibonacci

Next, we compared the two runtimes on a computationally expensive example: a simple function computing Fibonacci numbers with the naive exponential algorithm. Here are the results for computing Fib(N) for N = 5, 15, 25, 30:
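
The benchmark script itself was PHP and is not reproduced in the post; presumably it was the textbook naive recursion, sketched here in Python:

```python
def fib(n):
    """Naive exponential-time Fibonacci: each call spawns two more,
    so the cost grows roughly as the golden ratio to the power n."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(15))  # 610
```

Because the call tree doubles at each level, moving from N = 5 to N = 30 multiplies the work by several orders of magnitude, which is what makes it a useful CPU-bound stress test.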

PHP-FPM

Requests per second: 13789.24 [#/sec] (mean) Fib(5)
Requests per second: 3202.31 [#/sec] (mean)  Fib(15)
Requests per second: 118.94 [#/sec] (mean)   Fib(25)*
Requests per second: 8.40 [#/sec] (mean)     Fib(30)**

*only 1000 requests were performed to save time

**only 100 requests were performed to save time

HHVM FastCGI

Requests per second: 8842.70 [#/sec] (mean) Fib(5)
Requests per second: 8892.66 [#/sec] (mean) Fib(15)
Requests per second: 5581.37 [#/sec] (mean) Fib(25)
Requests per second: 737.56 [#/sec] (mean)  Fib(30)

In the case where N = 5, the page required almost no computation at all, and it showed just how well optimized FPM really is: it was over 50% faster than the HHVM FastCGI server, which indicates there is still plenty of room for improvement in our implementation. At N = 15, however, the amount of computation already shifted the balance in HHVM’s favor: where HHVM was still network-bound, FPM was clearly CPU-bound, with a result almost 3x worse than HHVM’s. At N = 30, HHVM’s just-in-time compilation really shined, yielding results nearly 90x faster than PHP.

Installation

Now to the fun part: how to get HHVM FastCGI working on your machine? For popular Debian-based distros we prepared a set of pre-built packages:

Ubuntu 12.04

echo deb http://dl.hhvm.com/ubuntu precise main | sudo tee /etc/apt/sources.list.d/hhvm.list
sudo apt-get update
sudo apt-get install hhvm

Ubuntu 13.10

echo deb http://dl.hhvm.com/ubuntu saucy main | sudo tee /etc/apt/sources.list.d/hhvm.list
sudo apt-get update
sudo apt-get install hhvm

Debian 7

echo deb http://dl.hhvm.com/debian wheezy main | sudo tee /etc/apt/sources.list.d/hhvm.list
sudo apt-get update
sudo apt-get install hhvm

Other

If your distro is not on the list, you can still run HHVM FastCGI with slightly more work (or you can take the scripts from the package above and repackage them for your distro). First, install the latest release of HHVM. Then, to run the server in FastCGI mode, pass additional parameters to the HHVM runtime:

cd /path/to/your/www/root
hhvm --mode server -vServer.Type=fastcgi -vServer.Port=9000

The server will now accept connections on localhost:9000 (only TCP sockets are supported for now). To run the server as a daemon, change the mode:

hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9000

All the usual options accepted by the HHVM runtime are also available in the FastCGI mode. Please make sure that HHVM runs from the directory where you wish to serve PHP files.

Once the HHVM FastCGI server is up and running, it’s time to appropriately configure your webserver of choice. As an example, we include instructions for Apache and Nginx.

Making it work with Apache

The recommended way of integrating with Apache is through the mod_proxy_fcgi module. First, you need to enable the mod_proxy and mod_proxy_fcgi modules, if you haven’t done so already. On Debian-based systems, running a2enmod proxy proxy_fcgi takes care of this; equivalently, make sure the following symlinks exist:

cd /path/to/your/apache/conf
ln -s ../mods-available/proxy.load mods-enabled/proxy.load
ln -s ../mods-available/proxy.conf mods-enabled/proxy.conf
ln -s ../mods-available/proxy_fcgi.load mods-enabled/proxy_fcgi.load

Next up, you need to insert a directive instructing Apache to send traffic to the FastCGI server. You can do so by inserting the following line in your apache.conf / httpd.conf file.

ProxyPass / fcgi://127.0.0.1:9000/path/to/your/www/root/goes/here/

Please note that this will route all the traffic to the FastCGI server. If you want to route only certain requests (e.g. only those from a subdirectory, or those ending in .php), you can use ProxyPassMatch, e.g.

ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/path/to/your/www/root/goes/here/$1

Consult Apache docs for more details on how to use ProxyPass and ProxyPassMatch.

Making it work with Nginx

The default FastCGI configuration from Nginx should work just fine with HHVM FastCGI. You will need to add the following directives inside one of your location directives:

root /path/to/your/www/root/goes/here;
fastcgi_pass   127.0.0.1:9000;
fastcgi_index  index.php;
fastcgi_param  SCRIPT_FILENAME /path/to/your/www/root/goes/here$fastcgi_script_name;
include        fastcgi_params;

The traffic for the surrounding location directive will then be routed to HHVM.

What’s next?

There are a couple of important features still missing from the initial implementation. The most notable is support for local UNIX sockets; at present, only the less performant (though more flexible) TCP sockets are available. The priorities for this release were:

  • Correctness,
  • Getting the architecture of the solution right, so that performance can be improved in the future.

We haven’t really done any profiling on the new server yet, so it is quite likely that there is some low-hanging fruit waiting to be picked. We are definitely looking forward to improving HHVM FastCGI performance in future releases.

Posted in Announcement

109 Responses to “FasterCGI with HHVM”

  1. Robert Lidberg says:

    This is so useful! Thanks!
    Will benchmark for comparison later this week.

  2. Joseph Scott says:

    Did the PHP-FPM tests use an opcode cache? Which version of PHP did the PHP-FPM tests use?

    • iwankgb says:

      I would love to know it too. Another interesting comparison could include hhvm+fastcgi+webserver and hhvm on its own.

      • Paul Tarjan says:

        Juliusz is currently back home in Poland and fast asleep. I’ll get him to reply as soon as he gets up.

        If you want to re-create the benchmarks I’ll happily link to your post.

      • Julius Kopczewski says:

So, yeah, that was the first thing I checked after I ran the tests the first time. It was enabled in php.ini.

        After giving it some thought, the only other thing that could possibly influence the results would be the compilation flags. I followed the official building from source instructions, however I could have screwed it up obviously.

        Please, feel free to roll your own benchmarks and we will link to them from the blog post.

      • Julius Kopczewski says:

        I ran HHVM as a webserver too. It is quite capable, since it’s what runs FB in production. In terms of I/O performance it seemed to do just slightly better than PHP-FPM + nginx. By that I mean that for “hello world” sites it would deliver about 10-15% better throughput.

  3. klausi says:

    “The following php.ini and php-fpm.ini files were used. ” ==> where are those files?

Did you run the PHP-FPM benchmark without an opcode cache? Then the results are boring; please repeat with APC enabled and clarify that in the post.

    • Julius Kopczewski says:

      Heh, yea. I wanted to post the files, then it turned out we don’t have a good thing for posting the files so I forgot to do that.

Let me tell you what I did instead. I used the default php.ini file that came with the PHP version released around September. I then made sure the opcode cache was enabled, and it was. I remember retrying with it enabled explicitly.

  4. Sandeep says:

Thanks for the update. The benchmarks are good and exciting. Could you please provide the PHP and HHVM configurations you ran the tests with? Anyway, HHVM is promising.

    • Julius Kopczewski says:

      php.ini file was the one that comes bundled with PHP sources. I just made sure that opcode cache is enabled. I did not make any tweaks to the HHVM configuration so I believe it would be equivalent to what comes with the OSS sources.

  5. Nick says:

    I notice that in your examples, the HHVM appears to be serving one single site – will fcgi work across multiple virtual hosts, or will it require a single instance per?

    I know the SCRIPT_FILENAME can be changed in nginx to not use a hardcoded filename

  6. […] We released a new version of HHVM today. This one includes all the hard work from our lockdown (detailed post to follow) and the ability to use HHVM with FastCGI. […]

  7. Nick says:

    Awesome!

    Huge shout out to you guys, this is pretty awesome! I’ve fiddled with HHVM before, but I’m super excited about this! Thanks!

  8. Oskar Hane says:

    This is great, thanks guys!
    Can’t wait to try this.

  9. Simon says:

    This looks very promising, looking forward to undertaking some performance benchmarks myself :)

10. I’ll try this in a Plesk installation; did you test this?

  11. N1 says:

    Nice, got my symfony2 app running with hhvm-fastcgi :-) Only missing piece for full support seems to be Intl/ICU.

  12. Robin says:

Will you be able to provide a working Vagrant box (or something like that) so the complete config is available as a working set somewhere? This would more easily allow other projects to get a running start at checking hhvm compatibility, as well as seeing a complete working example of the config + hhvm + project, etc.!

    Looking forward to it!

  13. Josh Koenig says:

    Getting 20 req/sec from a simple WP install sounds like there was no opcode cache enabled.

If so, these benchmarks aren’t very meaningful.

  14. Wolf says:

    I am getting the following error on CentOS. Does anyone know what might be causing it?

    upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream

    I am running hhvm as a daemon on port 9000 and copied the NGinx config here.

  15. Miguel Clara says:

I tried to test this with nginx, but can’t get past “Not found”

    location ~ \.php$ {
    fastcgi_param SCRIPT_FILENAME /var/www/blog$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include /etc/nginx/fastcgi_params;
    }

  16. diego says:

Hi, on my Ubuntu 12.04 there is only apache 2.2 in the repos, but mod_proxy_fcgi needs 2.4. Have you tested this on 12.04? Did you custom-build apache 2.4?

  17. lubosdz says:

I also have doubts about the objectivity of these benchmarks. 20 requests per second suggests something wrong with the environment setup. Also, the configuration (unix/tcp socket) may give different results. Even though HHVM probably performs nicely, independent professional benchmarks would be more welcome, sorry :-)

  18. Carlos Icaza says:

    I tried to install using repository technique, but the package is missing for i386 architecture… Debian 7.2 – VirtualBox

  19. […] Du kan læse meget mere og se benchmark testene her. […]

20. Thanks for sharing this. Has anyone implemented it on Windows Server?

  21. Sang Le says:

When will it support multiple instances? For example, multiple sites under Apache or Nginx.

    Currently it is hardcoded in /etc/hhvm/server.hdf, so I have to copy the init.d script and define another port + document root for each additional site.

  22. […] the HHVM team used the Fibonacci benchmark (http://www.hhvm.com/blog/1817/fastercgi-with-hhvm) I tried the same. What I did was vary the number of fibonacci numbers computed and measured the […]

  23. Neil Girardi says:

    Hi,

    On Ubuntu 13.10 after running sudo apt-get install hhvm-fastcgi I get “E: Unable to locate package hhvm-fastcgi”

  24. Jim Norton says:

    on apt-get update
    I get this error:

    W: GPG error: http://dl.hhvm.com precise Release: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY ….

  25. […] own Hiphop PHP compiler, which compiles PHP into C++ code to make incredible performance advances compared to the regular PHP interpreter and moderate performance advances over opcode caching. This system is being actively developed and […]

  26. Scott says:

    I’m a bit confused. Your first wordpress test shows 22 requests per second from php fpm, but my own nginx/php-fpm 2GB vps does over 1300 requests per second with apache bench against example.com/?p=1

In the comments, you say php-fpm was run with opcache, but the over-two-second response times disagree.

    My site does under 600ms pageloads with wordpress php-fpm.

    Something is wrong with your test.

    You either aren’t using opcache, or your hardware is really, really bad.

  27. Scott says:

    Server Software: nginx
    Server Hostname: localhost
    Server Port: 80

    Document Path: /?p=1
    Document Length: 0 bytes

    Concurrency Level: 50
    Time taken for tests: 15.860 seconds
    Complete requests: 1000
    Failed requests: 0
    Non-2xx responses: 1000
    Total transferred: 359000 bytes
    HTML transferred: 0 bytes
    Requests per second: 63.05 [#/sec] (mean)
    Time per request: 793.023 [ms] (mean)
    Time per request: 15.860 [ms] (mean, across all concurrent requests)
    Transfer rate: 22.10 [Kbytes/sec] received

    Connection Times (ms)
    min mean[+/-sd] median max
    Connect: 0 0 1.5 0 9
    Processing: 398 775 268.0 740 2047
    Waiting: 398 775 268.0 740 2046
    Total: 398 776 269.3 740 2055

    Percentage of the requests served within a certain time (ms)
    50% 740
    66% 796
    75% 846
    80% 884
    90% 972
    95% 1280
    98% 1871
    99% 1958
    100% 2055 (longest request)

    This is php-fpm with opcache with fastCGI cache off.

    What’s odd about it, besides being drastically faster than your results, is if I increase the concurrency to 500 from 50, the requests per second goes up to 434 requests per second

    • Julius Kopczewski says:

You got only 359 bytes per request and only 22.10 Kbytes/sec of transfer? Did you install the sample data at all? Note that my test shows much, much higher transfer, despite serving fewer requests. I would expect you are hitting a different page or your page contains different data.

      Either way, I will ask somebody to re-run the test when they are free.

      • Scott says:

        You make a good point.
        I actually had a full wordpress sitting there, but apparently when apache bench is run on /p=1, it’s actually having 404 errors, despite the fact that going to example.com/p=1 loads a real page.

        Testing with example.com/ works fine, example.com/p=1 fails.

        I also realized I had fastCGI cache set to 1s (which sits on tmpfs), which made a crazy performance difference.

        Server Software: nginx
        Server Hostname: unicornuproar.com
        Server Port: 80

        Document Path: /
        Document Length: 126772 bytes

        Concurrency Level: 100
        Time taken for tests: 7.630 seconds
        Complete requests: 1000
        Failed requests: 0
        Total transferred: 127048000 bytes
        HTML transferred: 126772000 bytes
        Requests per second: 131.05 [#/sec] (mean)
        Time per request: 763.048 [ms] (mean)
        Time per request: 7.630 [ms] (mean, across all concurrent requests)
        Transfer rate: 16259.83 [Kbytes/sec] received

        Connection Times (ms)
        min mean[+/-sd] median max
        Connect: 0 0 0.4 0 2
        Processing: 31 726 149.8 757 936
        Waiting: 18 700 147.4 735 885
        Total: 33 726 149.4 757 936

        Percentage of the requests served within a certain time (ms)
        50% 757
        66% 776
        75% 794
        80% 805
        90% 839
        95% 866
        98% 892
        99% 907
        100% 936 (longest request)

        This is ab -c 100 -n 1000 http://unicornuproar.com/

        • Julius Kopczewski says:

          So curious, are the updated results with or without the caching?

          • Scott says:

            Without caching.

            The setup is:
            Wordpress 3.8.1
            Arch Linux (current, rolling distro)
            nginx-custom-dev from aur
            2GB of ram (32GB ram box, rented from weloveservers.net)
            raid 10 hard drives of some variety (unknown)
            No fastCGI cache

          • Scott says:

            I should mention, 8 virtual cores on a xeon e3 1270v3 (quadcore with hyperthreading)

          • Scott says:

            How did I miss the most important parts…

            php 5.5 with opcache (the built in variation)

          • Julius Kopczewski says:

            I see, I will ask someone to rerun the test. I can’t do it right now myself because I’m terribly busy at least until 15th. Thank you very much for doing the tests!

  28. Julius Kopczewski says:

    Perhaps you’re getting 302 or something like that? Let me know if I read the stats correctly.

    • Julius Kopczewski says:

In particular, take note of that “Non-2xx responses: 1000” line. All results should be 200, and they were in my case.

  29. Ajinkya says:

    What is this hip hop virtual machine, does it do hip hop when you run any program on it?

  30. Julius Kopczewski says:

    No, only if you hit it hard.

  31. Scott says:

    I have a question.

    If you use nginx with fastCGI support, with hhvm, can you use nginx fastCGI caching?

    It should work, logically, but I’m just checking.

    • Julius Kopczewski says:

      Not sure how it works. I would have to understand what does the caching do on the protocol level. FastCGI is a stateful protocol, however the entire state is contained within a transaction. If therefore cache is used to skip an entire transaction I don’t see any issues. Otherwise I would have to understand the internals to know what is needed to support it.

      • Scott says:

        Thank you for your response.

        I don’t have the information to give you on that as I’m not a developer, just a guy who likes webservers.

    • Mark Jaquith says:

      Yep, it’s working just fine for me. Nginx doesn’t really care if it’s PHP-FPM or HHVM it’s sending the requests to. Its caching is unaffected.

  32. I’m happy to announce that Vagrant LNPP now includes optional support for HHVM FastCGI: https://github.com/kasperisager/vagrant-lnpp

    Thanks heaps for this article, it was a great help in getting things set up!

  33. mig5 says:

    Doesn’t seem to play nicely with Percona Server (drop-in replacement for MySQL) ?

    # /etc/init.d/hhvm-fastcgi start
    /usr/bin/hhvm: /usr/lib/libmysqlclient.so.18: no version information available (required by /usr/bin/hhvm)

  34. slurm says:

    test using wordpress:
    ab -c 100 -n 1000 http://localhost/wordpress/

    Without hhvm: (apache 2.4.6 + modphp 5.5.3)
    Requests per second: 108.79 [#/sec] (mean)

    with apache 2.4.6 + hhvm 2.3.1:
    Requests per second: 324.37 [#/sec] (mean)

    cpu: i5 760.
    ——————————————————
    Server Software: Apache/2.4.6
    Server Hostname: localhost
    Server Port: 80

    Document Path: /wordpress/
    Document Length: 7673 bytes

    Concurrency Level: 100
    Time taken for tests: 3.083 seconds
    Complete requests: 1000
    Failed requests: 0
    Write errors: 0
    Total transferred: 7914000 bytes
    HTML transferred: 7673000 bytes
    Requests per second: 324.37 [#/sec] (mean)
    Time per request: 308.293 [ms] (mean)
    Time per request: 3.083 [ms] (mean, across all concurrent requests)
    Transfer rate: 2506.87 [Kbytes/sec] received

35. I’ve set up nginx with hhvm but any .php file returns “Not found”, except index.php, which returns “Running on HHVM version 2.3.2”.

    Some of my configurations

    /etc/nginx/sites-enable/default

    location ~ \.php$ {
    root /usr/share/nginx/www;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /usr/share/nginx/www$fastcgi_script_name;
    include fastcgi_params;
    }

    /etc/hhvm/server.hdf

    Server {
    Port = 80
    SourceRoot = /usr/share/nginx/www/
    DefaultDocument = index.php
    }

    • Paul Tarjan says:

      That looks right. All your other files are in “/usr/share/nginx/www/” and the www-data user has access to read them?

      • Yes! I guess! :)
        -rwxrwxrwx 1 www-data www-data 383 Jul 7 2006 50x.html
        -rwxrwxrwx 1 www-data www-data 2759 Jan 5 14:17 default.txt
        -rwxrwxrwx 1 www-data www-data 51599 Set 27 11:47 image.php
        -rwxrwxrwx 1 www-data www-data 157 Jan 5 14:27 index.html
        -rwxrwxrwx 1 www-data www-data 22 Jan 5 14:48 index.php
        -rw-r--r-- 1 www-data www-data 5 Jan 5 16:01 www.pid
        root@rack:/usr/share/nginx/www# pwd
        /usr/share/nginx/www

  37. Jim Walker says:

    I’m guessing this is not something that could ever be installed on a cPanel server?

  38. […] guys over at Facebook. To say that it is impressive, is an understatement – take a look this post. They got a WordPress installation running at 23 concurrent requests per second, all the way up to […]

  39. […] article follows on from a previous post which is here, which in turned followed on from this post here. I decided to write this post as I have a number of sites inside my development box and I wanted to […]

  40. Bob says:

    Hi all,

    For some unknown reason HHVM-fastcgi won’t fire the mysqli() function.

    I’ve found that HHVM is not supporting mysqli but I thought hhvm-fastcgi was…

    Can somebody help? More info: http://stackoverflow.com/questions/21392048/hiphop-fatal-error-class-undefined-mysqli/21392106

  41. […] on December 17th, 2013 the HHVM team announced FastCGI support. FastCGI is a protocol for a server's communication with the application […]

  42. Tried to run my framework eBuildy on it; everything works fine, except it’s very slow (10x slower than with PHP 5.5 + OpCache).

    I ran an ab test and saw my CPU climb to 100% ;(

    Is there a setting to tune this?

    Thanks,

  43. […] FasterCGI with HHVM. Since the hhvm.com site is blocked, it can be viewed via the hhvm.cn mirror. […]

  44. […] Information on installing HHVM on other server distributions (including newer Ubuntu’s) can be found here. […]

  45. […] few things that are worth mentioning are the recent support for FastCGI and the integrated […]

  46. cabana says:

Hi,
    I have a question: would someone tell me why I can’t upload files via the FTP functions in PHP when I’m using HHVM?
    Maybe someone has had a similar problem and can tell me how to fix it ;)

  47. Nick says:

I think it should be mentioned that you’ll need to add ‘IP = 127.0.0.1’ to server.hdf.
    Otherwise (and this is my experience on Debian using the repos above) hhvm will bind to 0.0.0.0 and not localhost.

  48. Franky says:

Have you ever run performance tests on how well it performs when the cached WordPress template comes out of a ramdisk?

  49. Robert says:

    I’m using Ubuntu 13.10 with Apache2:
    I installed HHVM and it is running on port 9000…
    I cannot find these config files:
    mod_proxy.load
    mod_proxy.conf
    mod_proxy_fcgi.load
    Where can I download them from???

  50. Ruben de Vries says:

fast-cgi can spawn a big pile of workers, while hhvm is just one process.

    How does this compare? Is it configurable somehow?

  51. […] and they stand to gain a lot if they can make it faster, so they wrote their own runtime that’s much faster than the official one. This is exactly what the PHP world needed: making its already-fast […]

  52. Geo says:

    Trying to send SMTP mail i get error:

    Undefined function: stream_socket_enable_crypto

  53. […] need to run the same tests as the ones on the HHVM blog and compare the results. These test will be run against our project. Let’s see what we get! […]

  54. Crazyzurfer says:

    Doesn’t work for ubuntu 14.04 :(
