Q: What is mod_gzip?
Q: What platforms does mod_gzip run on?
Q: Is mod_gzip a 'Proxy Server'?
Q: Do I need ANY 'extra' Client-side software to use mod_gzip?
Q: How does mod_gzip actually reduce the HTTP content?
Q: What is IETF Content-Encoding?
Q: How can I tell if my browser is able to receive IETF Content Encodings?
Q: Has mod_gzip been tested?
Q: Can I test mod_gzip with standard benchmarking software?
Q: Where can I get an HTTP 1.1 compliant benchmarking tool for Apache?
Q: Am I losing any actual content when using mod_gzip?
Q: Does mod_gzip have any HTML based information screens?
Q: Can the accelerated content be cached?
Q: How do I report a problem with mod_gzip?
Q: What about Remote Communications, the Company?
Q: How do I find out more?
Q: What about related links?
Some common installation questions
Q: How do I add mod_gzip to my existing Apache Web Server?
Q: How do I add compression statistics to my Apache log files?
Q: How do I get mod_gzip to only compress files from certain directories?
Q: How do I compile a new version of mod_gzip.c for my platform?
Q: What is mod_gzip?
mod_gzip is a standard Apache Web Server module which acts as
an Internet Content Accelerator. Its function in life
is to become an integral 'part' of any existing Apache Web Server
and see that the content being delivered to YOU, the end-user,
is as small and as optimized as possible.
The Apache Web Server is by far the most popular and widely
used Web Server program in the world today, with more than 60 percent
of the Server market and at least 10.6 million installations worldwide.
Q: What platforms does mod_gzip run on?
Just about anything.
Since mod_gzip is simply a standard Apache Web Server module,
it runs on any platform supported by the Apache Web
Server itself.
Apache with mod_gzip runs on all popular Server platforms
( including Win 9x, NT, 2000, Linux, FreeBSD, UNIX, etc. ).
Q: Is mod_gzip a 'Proxy Server'?
No.
mod_gzip is a standard Apache Web Server module and
becomes 'part' of the Apache Web Server itself.
Q: Do I need ANY 'extra' Client-side software to use mod_gzip?
No.
mod_gzip does NOT require ANY
'extra' software to be installed on the Client side.
There is no 'Plug-in' or 'Client Proxy' of any kind. All you need is your
current HTTP 1.1 compliant browser.
All modern browsers released since early 1999 are already capable
of receiving compressed Internet content via standard IETF
Content-Encoding if they are HTTP 1.1 compliant.
There are a number of commercial products available that call themselves
Internet or Network accelerators but are actually using nothing more
than these same publicly available techniques to reduce the content. Most
of them still require unnecessary client side Plug-ins or Proxy Servers.
mod_gzip is comparable to any commercial product available and in
most cases out-performs the commercial products that are simply using
the public domain GZIP and 'deflate' compression methods and the
published IETF Content-Encoding standards.
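The size reduction this buys on markup-heavy content is easy to demonstrate with Python's standard gzip module, which wraps the same ZLIB library. This is a stand-alone sketch, not mod_gzip code:

```python
import gzip

# A small, repetitive HTML fragment standing in for a typical page.
html = ("<html><head><title>Example</title></head><body>"
        + "<p>Some repetitive page content.</p>" * 200
        + "</body></html>").encode("ascii")

compressed = gzip.compress(html)

print(len(html), "->", len(compressed))
# Highly repetitive markup compresses dramatically, which is why IETF
# Content-Encoding pays off so well for HTML, XML and plain text.
assert len(compressed) < len(html)
```

The exact ratio depends on the page, but tag-heavy HTML routinely shrinks to a small fraction of its original size.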
Q: How does mod_gzip actually reduce the HTTP content?
mod_gzip for the Apache Web Server uses the well established
and publicly available IETF ( Internet Engineering Task Force )
Content-Encoding standards in conjunction with publicly available
GZIP compression libraries such as ZLIB ( Copyright © 1995-1998
Jean-loup Gailly and Mark Adler ) to deliver dynamically compressed
content 'on the fly' to any browser or user-agent that is capable
of receiving it.
mod_gzip also automatically handles any situation where
requests are being made by a browser or other HTTP user-agent that
is not HTTP 1.1 compliant and is incapable of receiving IETF
Content-Encoding ( or any other kind of encoding or compression ).
In those cases, mod_gzip will either use other methods
to optimize the content as best it can for the non-HTTP 1.1
compliant requestor or simply return the response(s) 'untouched'.
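The per-request decision described above can be sketched roughly as follows. The function name and rules here are illustrative only, not mod_gzip's actual internals:

```python
def should_gzip(protocol: str, headers: dict) -> bool:
    """Rough sketch: compress only for HTTP/1.1 clients that
    advertise gzip in Accept-Encoding; otherwise pass through."""
    if protocol != "HTTP/1.1":
        return False
    accepted = headers.get("Accept-Encoding", "")
    return "gzip" in [e.strip() for e in accepted.split(",")]

print(should_gzip("HTTP/1.1", {"Accept-Encoding": "gzip, deflate"}))  # True
print(should_gzip("HTTP/1.0", {"Accept-Encoding": "gzip"}))           # False -> sent untouched
```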
More advanced versions of mod_gzip contain compression and content
reduction methods that are much more sophisticated than simple IETF
Content-Encoding and provide levels of performance that are impossible
to achieve using simple public domain GZIP or IETF Content-Encoding techniques.
Whereas standard GZIP compression typically provides only a modest
average level of compression, RCI has other methods and algorithms
( some patented and others patent-pending ) for compressing Internet
content that can consistently provide better than 94 percent
compression on any HTML, XML, WML or text based data stream(s).
Q: What is IETF Content-Encoding?
In a nutshell... it is simply a publicly defined way to compress
HTTP content being transferred from Web Servers down to Browsers
using nothing more than public domain compression algorithms that
are freely available.
"Content-Encoding" and "Transfer-Encoding" are both clearly defined
in the public IETF Internet RFC's that govern the development and
improvement of the HTTP protocol which is the 'language' of the
World Wide Web itself.
See [   Related Links   ].
"Content-Encoding" was meant to
apply to methods of encoding and/or compression that have been
already applied to documents BEFORE they are requested. This is also
known as 'pre-compressing pages'. The concept never really caught on
because of the complex file maintenance burden it represents and there
are few Internet sites that use pre-compressed pages of any description.
"Transfer-Encoding" was meant to apply to methods of encoding
and/or compression used DURING the actual transmission of the data
itself.
In modern practice, however, and for all intents and purposes,
the 2 are now one and the same.
Since most HTTP content from major online sites is now dynamically
generated the line has blurred between what is happening BEFORE a
document is requested and WHILE it is being transmitted. Essentially,
a dynamically generated HTML page doesn't even exist until someone asks
for it so the original concept of all pages being 'static' and already
present on the disk has quickly become an 'older' concept and the
originally defined black-and-white line of separation between
"Content-Encoding" and "Transfer-Encoding" has simply turned into a
rather pale shade of gray.
Unfortunately, the ability for any modern Web or Proxy Server
to supply 'Transfer-Encoding' in the form of compression is
even less available than the spotty support for 'Content-Encoding'.
Suffice it to say that regardless of the 2 different publicly defined
'Encoding' specifications, if the goal is to compress the requested
content ( static or dynamically generated ) it really doesn't matter
which of the 2 publicly defined 'Encoding' methods is used... the result
is still the same. The user receives far fewer bytes than normal and
everything is happening much faster on the client side.
The publicly defined exchange goes like this...
1. A Browser that is capable of receiving compressed content
indicates this in all of its requests for documents by supplying
the following request header field when it asks for something...
Accept-Encoding: gzip, compress
2. When the Web Server sees that request field then it knows
that the browser is able to receive compressed data in one
of only 2 formats... either standard GZIP or the UNIX 'compress'
format. It is up to the Server whether it will compress the
response data using either one of those methods ( if it is
even capable of doing so ).
3. If a static compressed version of the requested document is
found sitting on the Web Server's hard drive which matches
one of the formats the browser says it can handle then the
Server can simply choose to send the pre-compressed version
of the document instead of the MUCH larger uncompressed original.
4. If no static document is found on the disk which matches any
of the compressed formats the browser is saying it can 'Accept'
then the Server can now either choose to just send the original
uncompressed version of the document OR make some attempt to
compress it in 'real-time' and send the newly compressed and
MUCH smaller version back to the browser.
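The four steps above, as seen from the Server's side, can be sketched end to end. This is a hypothetical helper, not Apache code; a real server would also handle 'compress', static pre-compressed files, and many more headers:

```python
import gzip

def build_response(body: bytes, accept_encoding: str) -> bytes:
    """Sketch of step 4: compress in 'real-time' when the client
    says it can accept gzip, otherwise send the original bytes."""
    if "gzip" in accept_encoding:
        payload = gzip.compress(body)
        headers = b"HTTP/1.1 200 OK\r\nContent-Encoding: gzip\r\n"
    else:
        payload = body
        headers = b"HTTP/1.1 200 OK\r\n"
    headers += b"Content-Length: %d\r\n\r\n" % len(payload)
    return headers + payload

page = b"<html>" + b"<p>hello</p>" * 100 + b"</html>"
response = build_response(page, "gzip, compress")
# The payload after the blank line is much smaller than the page,
# and any HTTP 1.1 browser can decode it transparently.
```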
Most popular Web Servers are still unable to do this final step.
The Apache Web Server has 66 percent of the Web Server market
and is still incapable of providing any real-time compression
of requested documents, even though all modern browsers have been
requesting compressed responses and capable of receiving them for
more than 2 years.
Microsoft's Internet Information Server is equally deficient. If
it finds a pre-compressed version of a requested document it might
send it but has no real-time compression capability.
IBM's WebSphere Server has some limited support for real-time
compression but it has 'appeared' and 'disappeared' in various
release versions of WebSphere.
The VERY popular SQUID Proxy-Caching Server from NLANR also
has no dynamic compression capabilities even though it is
the de-facto standard Proxy-Caching software used just
about everywhere on the Internet.
The original designers of the HTTP protocol really did not
foresee the current reality whereby so many people would be
using the protocol that every single byte would count. The
heavy use of pre-compressed graphics formats such as .GIF
on the Internet and the relative inability to reduce the
graphics content any further than the native format itself
makes it even MORE important that all other exchange formats
be optimized as much as possible.
The same designers also did not foresee the current reality
where MOST HTTP content
from major online vendors is generated DYNAMICALLY and so
there really is no chance for there to ever be a 'static'
compressed version of the requested document(s).
Public IETF Content-Encoding is still not a 'complete' specification
for the reduction of Internet content but it DOES WORK and the
performance benefits achieved by using it are both obvious and dramatic.
Q: How can I tell if my browser is able to receive IETF Content Encodings?
If your user-agent (browser) is adding Accept-encoding: gzip to
any GET request that it sends then it is trying to indicate to a
Web Server that it is capable of receiving IETF Content Encodings.
Whatever encoding schemes a user-agent (browser)
is able to receive will be listed after the colon on the
Accept-encoding: request header line.
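Pulling the advertised schemes out of that header line is straightforward. This is an illustrative parse that ignores optional quality values such as gzip;q=0.5:

```python
header = "Accept-Encoding: gzip, deflate"

# Take everything after the colon, then split on commas.
value = header.split(":", 1)[1]
encodings = [e.strip() for e in value.split(",")]

print(encodings)  # ['gzip', 'deflate']
```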
If you don't know how to 'see' what your user-agent (browser) is sending
there is an easy way to tell.
RCI maintains an online Connection speed test link that
will tell you exactly what your browser is sending, what it is capable
of receiving, and will give you a report on the performance increase
you can expect to see over your current connection when you are
receiving compressed Web content.
Just go to the following URL to perform the test on whatever
connection you choose and whatever user-agent (browser) you want
a report on...
http://12.17.228.52:7000/
NOTE: Port 7000 is a valid 'safe' port at 12.17.228.52 but if you
are behind a firewall that won't even allow your browser to request
anything from any port other than HTTP port 80 then you probably
will not be able to run this test. Contact your LAN administrator
about allowing access to external ports other than HTTP port 80.
The connection test will begin immediately and will cycle through
4 screens that simply say...
Your connection is being evaluated... X
'X' will be a number that will change from 1 through 4 and then
the 'final report' should appear.
That 'final report' will look like this...
Begin: Example connection test report
Speed Test Thermometer
Your current IP address is 216.60.210.59
Your connection will support an actual byte transfer rate of
1924.4 / 8659.9 bytes per second ( 15395.3 / 69278.9 Kbps ).
Your browser is capable of receiving CHTML® Content Compression.
Your browser is capable of receiving GZIP Content Encoding.
To repeat the test do NOT press RELOAD. Just press the button below...
[ Thermometer graphic: a vertical bytes-per-second scale from 600 up
to 187500, with connection speed markings ( 14.4k, 28.8k, 56.6k,
Full ISDN, T-1 ) comparing the Uncompressed and Compressed transfer
rates ]
Remote Communications, Inc. @ http://www.RemoteCommunications.com
Header information from your browser...
GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt; TUCOWS)
Host: 12.17.228.52:7000
Connection: Keep-Alive
Cookie: CFTOKEN=56867408; CFID=45639
Test buffer 1 = Yahoo's home page
Test buffer 1 length = 12434 bytes
Total bytes compressed = 2480 bytes
Bytes per second compressed = 8659 bytes