
What is ncache?


A web cache system based on the nginx web server; faster and more efficient than Squid.

  • Large storage: can hold over 30,000,000 cached objects
  • Self-sorting shared-memory hash index
  • Uses Linux epoll, sendfile and AIO to improve performance
  • Built on the fastest web server framework: nginx
  • High throughput and high concurrency for cache requests
  • Does not cache HTTP headers
  • Low CPU cost and low iowait
  • Caches the hottest data in memory (HAC)
  • AIO queue with the lio_listio function




  1. NCACHE
     - A fast web cache server based on nginx
     - Uses the aio, sendfile and epoll modules
     - Self-sorting shared-memory hash index
     - High performance and large storage
     - Low CPU cost and low iowait
     - Record locks instead of process locks
     - Does not cache HTTP headers
  3. STRUCTURE (architecture diagram)
     - The hash index is initialized by the nginx master process at startup and shared by the worker processes
     - Worker processes read and write the cache files on disk through the file system or a raw device
     - A body filter takes the proxied content from the backend servers and saves it to disk with AIO
  4. Logic diagram
     - A request looks up the cache in the index
     - Found and not timed out: sendfile output
     - Not found, or timed out: proxy to the backend; the body filter streams the response out with writev, writes it to disk with AIO, and refreshes the index
  5. The self-sorting shared-memory hash index (diagram)
     - The first level is a hash table; collisions are chained in a list allocated with hash_malloc (entries such as 1(6), 2(5), 3(4), where the number in parentheses is the hit count)
     - As an entry's hit count grows (e.g. 2(5) becoming 2(7)), the chain reorders itself so hotter entries sort forward: 1(6), 3(4), 2(7)
     - Allocation starts at Top:0; when it reaches the bottom of the shared memory (offset 33554432), ncache wraps back to offset 16777216 and looks for entries that can be reused
  6. Record lock (diagram)
     - The in-memory index is mmap'ed to a sync file, so updates are synced back automatically
     - Worker processes lock individual records instead of the whole index, so a read or write does not make other worker processes or requests wait
  7. Performance versus Squid, part 1 (benchmark charts: CPU usage first, iowait last; the blue line is ncache)
  8. Performance versus Squid, part 2 (benchmark chart comparing Squid and ncache)
  9. Future
     - The aio_sendfile function
     - Compress the shared-memory hash index
     - Cache the hottest data in memory
     - Raw device read and write
     - Distributed storage system
     - AIO queue with the lio_listio function
  10. The end
     - Google Code: http://code.google.com/p/ncache/
     - Nginx wiki: http://wiki.codemongers.com/
     - Our mail: [email_address], [email_address]