Joined: Feb 2005
Posts: 693
Member
Nick, I just made the changes you suggested, except that I went with only ten for maxrequests, out of respect for the reasons mentioned in the responses. Nonetheless, wow!
By the way, for information to others, double-click on the item lines to change them.
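For anyone hunting for the exact preference names: in the Mozilla/Firefox builds of that era, the pipelining knobs lived in about:config and could also be set from a user.js file. The names below are the old pipelining prefs as I recall them, so treat them as an assumption and verify against your own about:config before relying on them:

```js
// user.js sketch -- old Mozilla/Firefox pipelining prefs (hypothetical
// values; confirm the pref names exist in your browser's about:config)
user_pref("network.http.pipelining", true);
user_pref("network.http.pipelining.maxrequests", 10); // the "ten" mentioned above
```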
Larry Fine Fine Electric Co. fineelectricco.com
Joined: Mar 2005
Posts: 212
Member
I made the changes a few hours ago according to Nick's suggestions, and I just went back and changed maxrequests to 4 because of the comments. I don't really understand the difference, but out of concern for some poor slow server I did it anyway. The difference is dramatic, and I have cable internet that pings like a T3. I can imagine it makes a slow connection almost exciting.
Joined: Jul 2004
Posts: 625
Member
I disagree. If more people start using more efficient browsers (i.e., ones that allow many simultaneous connections), then the server sites will eventually upgrade their server software to support that usage. In other words, "Necessity is the mother of invention." Give them a necessity and they'll find a fix.
Joined: Nov 2002
Posts: 456
Member
I have been using Mozilla or Firefox for about two years now. On the computer I am using now, I think it is the only browser installed, as it runs Linux.
Joined: Sep 2004
Posts: 93
Member
To SolarPowered:
You are gaining only false efficiency by tweaking that parameter. For one thing, the download speed is always determined by the slowest link in the path from you to the host; that path is not always the most direct one, and it is entirely beyond your control. The link to your host is not necessarily even as fast as your link to your ISP, and it is shared between all those port 80 "connections" to your browser in any case, as well as between all the other users, with additional overhead for each connection. That is why, after a lot of testing and tuning, the default numbers were chosen: they provide near-optimal efficiency in most cases and avoid quite a bit of unprofitable overhead.
There is another error in the reasoning: the ability to override these limits has existed ever since Netscape 3 and Internet Explorer 3. Similar tweaking guides went around early in the days of those browsers, and I'm tired of seeing this again and again. For once, a point in MS's favour: the dratted parameter was stored in the registry, where it was difficult to get at, so people eventually stopped doing it.
So there is no sudden deployment of more efficient browsers.
The HTTP RFC recommends not overloading servers, the same way the wiring codes tell us not to simply stick bigger fuses in when a circuit keeps overloading. Would you plug a 16 A electric heater into a 10 A socket and say, "Well, they will change all the wiring when I start doing this, because necessity is the mother of invention"? Who is "they", in any case, and where are they getting the money to pay for all these new servers? The software doesn't need changing; it is mostly upgrades to *hardware* that are needed to support more connections, and that costs a lot of money.
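To make the trade-off concrete, here is a minimal Python sketch of the browser-side behaviour under discussion. All names, URLs, and numbers are hypothetical; the point is only that however many objects a page has, at most maxrequests of them are ever in flight at once:

```python
# Sketch: why browsers cap simultaneous connections per host.
# fetch() is a stand-in for a real HTTP request; the bounded thread
# pool plays the role of the browser's per-host connection limit.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(url):
    time.sleep(0.01)          # pretend to do network I/O
    return f"{url}: 200 OK"

# A page made of many teeny-tiny objects (hypothetical URLs).
urls = [f"http://example.com/obj{i}" for i in range(12)]

maxrequests = 4               # the parameter the thread is tweaking
with ThreadPoolExecutor(max_workers=maxrequests) as pool:
    results = list(pool.map(fetch, urls))

# All 12 objects are fetched, but never more than 4 at a time.
print(len(results))
```

Raising maxrequests here would not make the simulated "network" any faster; it would only increase how many requests contend for the same bottleneck link at once, which is the point jooles is making.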
Joined: Jul 2002
Posts: 717
Member
I surf the web using Safari. It rocks.
Joined: Jul 2004
Posts: 625
Member
jooles, I apologize if my post came off a bit antagonistic. I come from a background of being one of four guys at Intel who developed the original Intel Network Architecture; the guy in the cubicle next to me literally wrote the Ethernet spec; I and one other guy implemented the first operating 10 megabit Ethernet; I wrote the operating system that ran the Intel Network Architecture.
I have a really, really good idea of what this stuff is capable of doing.
And I get really frustrated at how badly most of the stuff out there is implemented. I think that came through a bit in my previous post.
A couple of bits of "low-hanging fruit" that could easily be addressed: 1) If they didn't design their web pages as a gazillion teeny-tiny objects that each require a separate connection to retrieve, this issue wouldn't even exist. They have complete control over that: if they consolidate their web pages into single objects, browsers will only open a single connection. 2) Stop forking off a new task for every connection. fork() is a very expensive operation. Organize the system as server tasks that simply activate a record in a connection data structure to create a new connection.
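Point (2) can be sketched in miniature with Python's selectors module. This is not the poster's actual design, just an illustration of the general technique: one process multiplexes many connections, "activating a record" (registering the socket) instead of forking a task per connection. The port, response text, and one-request demo loop are all illustrative:

```python
# One process, many connections: I/O multiplexing instead of fork()-per-connection.
import selectors
import socket
import threading

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    # "Activate a record" for the new connection: just register it.
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
    sel.unregister(conn)
    conn.close()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))          # ephemeral port for the demo
port = server.getsockname()[1]
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

# Demo client: one request from a background thread.
responses = []
def client():
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"GET / HTTP/1.0\r\n\r\n")
        responses.append(c.recv(4096).decode().splitlines()[0])

t = threading.Thread(target=client)
t.start()

# Event loop, bounded for the demo: two events (accept, then read).
for _ in range(2):
    for key, _ in sel.select(timeout=2):
        key.data(key.fileobj)
t.join()
server.close()
sel.close()
print(responses[0])
```

A production loop would of course run `while True` and keep per-connection state in the registered record, but the structural point stands: no process or thread is created per connection.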
Anyway, enough of my rant. I wish I had the time to bring a decent network operating system to market. The basic OS is actually pretty easy to do, but coming up with device drivers for all the different video controllers, network adapters, disk controllers, disk drives, printers, etc., out there is a massive undertaking. And expensive, since you have to own one of everything that exists in order to test it.
Joined: Sep 2004
Posts: 93
Member
No antagonism at all; no problem. Does that mean that Bob Metcalfe, who invented Ethernet at Xerox before starting 3Com, was working with you? That's cool. I thought the specification for the 10 Mbps baseband network became a general IEEE standard, 802.3, in the 1980s, but I'd forgotten that at one point the DEC/Intel/Xerox group was offering a similar thing.
The mode of pre-existing sessions/connections you propose is far closer to the way the mainframe does it, and it is indeed a lot more efficient. The other thing one notices, though, is that the fork() call and its equivalents carry different overheads in different kernels and environments. Java and NT are pretty heavy, but most Unix kernels seem to have a comparatively low overhead for starting a process.
At work we've all been told to read this article, and to find out more about this sort of thing wherever we can, with a view to the next phase of our project (we will be delivering real-time streamed proofs of printer plates for approval before sending them to the plate-making machine, and of necessity at that point it is all 'tiny bits', as rendering into colour-separated TIFFs does not occur until after approval):
http://www.adaptivepath.com/publications/essays/archives/000385.php
It describes an approach similar to the one that gives the remarkable new map application from Google its superb responsiveness, and does so *without* bringing the servers to their knees.
My favourite OSs for networks are OS/390 and Solaris. Would you follow either of those designs, or do you prefer another?
Joined: Jul 2002
Posts: 717
Member
Nope, not yet. Currently running Panther. I probably won't upgrade until a hardware upgrade is required, and if personal history repeats itself, that will be a long time: my first Mac, purchased in 1988, was a Mac Classic. It still works fine but has only 4 MB of memory and a 40 MB hard drive. My second Mac was purchased in 1993; it still works fine too, but I let one of my kids have it for school. This iMac was purchased last year.