Talk:Virtual memory/Archive 1

From Wikipedia, the free encyclopedia
Archive 1

Lead

I removed:

There is a common misconception that virtual memory is for providing more computer storage to software than actually exists. Though useful, this is not the only use. A computer's physical memory address space is shared by RAM, ROM and input/output. Of these only RAM is available for use by application software. The RAM might be spread across the system's address space, and interspersed with ROM and input/output. This layout varies from computer to computer. Without virtual memory, software would have to be modified to run on each particular computer. Virtual memory hides the physical addresses from software, permitting software vendors to sell precompiled software.
Actually, I think that is not a misconception. Systems have been built with protection and relocation but no ability to provide more apparent memory than actually exists. These were not usually called virtual memory systems.

Because I really hate it when people respond to errors in one paragraph by adding another that contradicts it. If someone can reconcile these, even crappily, please put them back in a sensible form. Tuf-Kat

I agree. One of the greatest advantages of virtual memory is providing more storage than available RAM. As it stands, the article doesn't mention it at all. I'm not enough of an expert to provide a coherent discussion (at least not right now), but not mentioning it at all makes the entry erroneous. —Frecklefoot 18:29 13 Jun 2003 (UTC)

Is the link for determinism right? It would seem from context that the author was referring to an optimization technique that many processors use, not the age-old philosophical debate. —Preceding unsigned comment added by 68.5.250.201 (talk) 17:19, 4 February 2009 (UTC)

Paging != virtual memory

This article is somewhat poorly organized. It confuses three separate notions:

  • the provision of more CPU-addressable memory than the machine actually has main memory (which is what is properly known as 'virtual memory'), often as part of a multi-level storage architecture
  • paging, which was added to the above mostly as a memory allocation strategy, to obviate the need for copying stuff around, and also to allow optimization of the size of the objects moved between the main memory and disk memory levels of the MLSA
  • protection of supervisor memory from user programs

You can do the first with segmentation hardware alone, and a number of early computers did so. You can do the third without either of the first two, and again some early systems (e.g. the early 360 machines) did so.

I don't have the energy at this time to rewrite this article, and the paging article, to make these distinctions clear, but will do so Real Soon Now. Noel 13:17, 13 Sep 2003 (UTC)

I've updated the introduction, which is at least a start on this, however there perhaps could be more work done on the rest of the article. Likewise, I don't have the energy to do the whole thing, but may have a crack at it when I get some spare time. Guinness 16:52, 26 November 2005 (UTC)

Someone please link computer concepts in texts for non-computer geniuses. This will help make this article more understandable for the average person. Thanks. Thermidoreanreaction 13:09, 15 March 2007 (UTC)(TR)

Remark

In the first paragraph, it tells us that computer processes are not limited to physical memory size, due to VM. This is not correct; you can't have more VM than your total physical memory. VM just means that you let more media be available as allocatable memory.  Sverdrup (talk) 11:29, 14 Dec 2003 (UTC)

This article is using "physical memory" to mean RAM - i.e. disk is not included in "physical memory". So the statement is accurate.
I think this is just a matter of terminology. I suggest using "Primary Storage" to refer to what people generally call RAM. As a side issue, you could have more memory than the total physical memory simply by incorporating data compression into your virtual memory system. Guinness 23:03, September 3, 2005 (UTC)

Rewrite

I went to fix an error in the article (it said all TLB refills, when a translation is not in the TLB, were done under software control, which is not true - most CPUs refill the TLB cache from the page tables in main memory without taking an exception) and I simply couldn't find a simple way to fix the article (see my comments above about how it mixed up paging and virtual memory). Every time I fixed something, it made some larger part of the article not work. So I finally wound up rewriting much of the article.

It now treats virtual memory separately from paging. (A number of systems did VM without paging, most notably the PDP-11, but also machines like the GE-645, which supported both paged and unpaged segments.) It still needs more work, but IMNSHO we're closer to where we ought to be than we were before. Noel (talk) 04:41, 1 Dec 2004 (UTC)
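The TLB-refill distinction in the thread above can be sketched in C. This is a deliberately simplified model (a single-level page table and a one-entry-per-page TLB; all names and structures are hypothetical, not any real CPU's format) whose only point is that a TLB miss is refilled from the in-memory page table without OS involvement; only a non-present page needs a fault.

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  16   /* keep the toy address space tiny */

/* Hypothetical single-level page table entry. */
typedef struct { uint32_t frame; int present; } pte_t;

/* Toy TLB: one slot per virtual page; valid == 0 means miss. */
typedef struct { uint32_t frame; int valid; } tlb_entry_t;

static pte_t       page_table[NUM_PAGES];
static tlb_entry_t tlb[NUM_PAGES];

/* Translate a virtual address (vpn must be < NUM_PAGES here).
   On a TLB miss the "hardware" walks the page table and refills the
   TLB silently; only a non-present page is a fault (returned as -1,
   where a real MMU would trap to the OS). */
long translate(uint32_t vaddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    uint32_t off = vaddr & (PAGE_SIZE - 1);
    if (!tlb[vpn].valid) {                        /* TLB miss */
        if (!page_table[vpn].present)
            return -1;                            /* page fault */
        tlb[vpn].frame = page_table[vpn].frame;   /* hardware refill */
        tlb[vpn].valid = 1;
    }
    return ((long)tlb[vpn].frame << PAGE_SHIFT) | off;
}
```

On a software-managed TLB (e.g. MIPS-style designs) the refill step would instead be done by a trap handler, which is the case the article had over-generalized to all CPUs.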

What is virtual memory anyway?

I removed the text:

A computer usually runs multiple processes during its operation. Each of these processes has its own address space, the area in main memory in which it stores information. It would be very expensive to give each process the entire memory space available because most processes use only a small portion of the main memory at any given time. Virtual memory divides physical memory into sections and allocates them to different processes.

because it's flat incorrect. What it describes (the division of actual physical memory between various processes) is not virtual memory, but rather plain old multiprocessing. Many early OS's (e.g. OS/360, in its early MFT and MVT versions) provided exactly what is described in that paragraph, but were most definitely not virtual memory systems.

That division of memory can be provided by "base and bounds registers" (such as provided on early PDP-10s and System/360 machines), without in any way providing virtual memory.

Both segmentation and paging have been used to provide virtual memory (the latter, as described in the article), but both are more allocation and/or implementation techniques which can be used to provide virtual memory than virtual memory in and of themselves. (I.e. you can have paging as a main memory allocation technique without having virtual memory.) Noel (talk) 19:40, 6 Dec 2004 (UTC)
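The "base and bounds registers" scheme mentioned above is simple enough to sketch. This is a hypothetical register layout (real machines differ in detail): every address is checked against a bounds register and relocated by a base register, which gives relocation and protection but not virtual memory, since the whole process must still fit in main memory.

```c
#include <stdint.h>

/* Hypothetical per-process relocation registers. */
typedef struct {
    uint32_t base;    /* added to every virtual address */
    uint32_t bounds;  /* size of the process's address space */
} bb_regs;

/* Returns the physical address, or -1 for an out-of-bounds access
   (where real hardware would trap with a protection violation). */
long bb_translate(bb_regs r, uint32_t vaddr) {
    if (vaddr >= r.bounds)
        return -1;                  /* protection violation */
    return (long)r.base + vaddr;    /* relocation */
}
```

Note there is no notion here of an address being valid but "not in memory" - that missing case is exactly what separates this from virtual memory.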

My professor made it quite clear in class that the main reason we use virtual memory is not to make main memory appear larger - this was what I suggested when he asked the question, so I made a strong mental note. VM was invented to alleviate the burden of managing the two levels of memory represented by main memory and secondary storage. Before VM, programmers were responsible for moving overlays back and forth from secondary storage to main memory. However, in modern-day computers, because of modern memory sizes, a programmer would almost always be able to load his entire program into main memory without having to deal with these overlays. Rather than simply size and space issues, it has much more to do with relocation of data (allowing the same program to run in any location in physical memory), and protection of that data (preventing one process from modifying another's code/data). It is true that one of the features of virtual memory is that it gives the appearance of larger main memory, but this is not often used because programs very rarely exceed modern memory capacities.
This is sort of a "six/half-dozen" argument, because "make main memory appear larger" and "alleviate the burden of managing two levels of storage" are two sides of one coin - they are really just different ways of saying the same thing. You can't get the programmer out of the business of explicitly managing multiple levels of storage unless it looks to them like they have a large enough main memory that they don't need secondary storage. That's why they called it virtual memory, right? Why do you think they picked that name?
As to the relocation issues, again, you don't have to have virtual memory to do that, and many 1960's computers did so - e.g. the KA-10 model of the PDP-10, which gave each process a private address space starting at location 0, but had no support for virtual memory. Ditto for protection.
As to the size of modern memories, it's true that they are now so large you don't need virtual memory as much - perhaps if we'd had memories that large in the 1960's, we'd never have bothered with virtual memory, and stuck with the simpler memory management mechanisms that just provided relocation and protection. (Although we'd have probably wanted paging too, to make memory allocation easier.) Noel (talk) 05:02, 8 Dec 2004 (UTC)
PS: The difference between a paging/relocation/protection system with and without virtual memory, of course, would be that for the without case, you wouldn't need the ability to take a page fault - i.e. have an instruction execution stop because a memory reference wasn't able to complete, and be able to restart (or continue) that instruction later, after the memory was available. With enough memory, all of a process' memory would always be in main memory when it was running, and you'd never need to be able to take a page fault. —Preceding unsigned comment added by Jnc (talkcontribs) 01:36, 8 December 2004
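The fault-and-restart property described in the PS can be shown as a toy loop (a hypothetical pseudo-OS, not any real kernel's handler): an access that faults is retried after the handler has brought the page in, invisibly to the program.

```c
#define NUM_PAGES 8

static int present[NUM_PAGES];   /* is the page in main memory? */
static int fault_count;          /* how many faults were taken   */

/* Hypothetical OS page-fault handler: "read in" the page from
   secondary storage and mark it present so the access can retry. */
static void handle_fault(int vpn) {
    fault_count++;
    present[vpn] = 1;
}

/* The key property of virtual memory: an access that faults is
   stopped, fixed up, and restarted without the program noticing. */
int access(int vpn) {
    while (!present[vpn])        /* fault: stop, fix, restart */
        handle_fault(vpn);
    return 1;                    /* access completes */
}
```

In the "enough memory" case Noel describes, `present` would simply always be true and the fault path would never be exercised.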
The difference between paging and segmentation with respect to virtual memory is simply how the code/data gets divided. Paging implies a constant size overlay while segmentation allows for varying sized overlays.
Yes and no. You're right that paging uses fixed size units, but there may be a lot more to segmentation than the size. (I say "may be" because not all systems with segmentation have more to them.)
However, in some (e.g. Multics, as well as a number of later systems which copied it, such as the IBM System/38, the Prime machines, etc), the segmentation was actually visible to the user processes, as part of the semantics of the memory model provided to processes. In other words, instead of a process just having a memory which looked like a single large vector of bytes (or words or whatever), it had more structure. This is different from paging, which doesn't change the model visible to the process. This can have important consequences.
And no, it wasn't a kludge (as in the 80286, say) - in Multics, at least, the segmentation was a very powerful mechanism that was used to provide a single-level model, in which there was no differentiation between "process memory" and "file system memory" - a process' active address space consisted of only a list of segments (files) which were mapped into its potential address space (both code and data). And no, it's not the same as the mmap() model in later versions of Unix, etc, because inter-"file" pointers (both code and data) don't work if people are mapping files into semi-arbitrary places, at least not without a lot of extra instructions as overhead. Multics could do relocated inter-segment references as an addressing mode on most instructions. (See the "Multics" book by Organick if you want to know more about how this worked - different processes could map the same segment into different places in their address space and it all still worked.) Noel (talk) 05:02, 8 Dec 2004 (UTC)
I am getting this from my textbook and not making it up off the top of my head, nor am I copying text directly from the book. --Underdog 19:00, Dec 7, 2004 (UTC)
Understood. I hope my comments above were useful. Noel (talk) 05:02, 8 Dec 2004 (UTC)

Why virtual memory

I removed:

Today, however, this is not the primary reason virtual memory is used.

because i) it didn't say what the primary reason is now, and ii) I was unable to add said reason (AFAIK the reasons to use virtual memory remain as they always were - to avoid burdening the programmer with the details of multi-level storage management, and to simplify programs). If someone would care to fill in what the reason is, we can add this back. Noel (talk) 20:11, 6 Dec 2004 (UTC)

As a result of the discussion above, I think I understand what you meant here (that the ratio of "desired real memory" to "actual real memory" is much closer to one now, so that making the computer's main memory look much larger than it is really is a lot less important now). Is that right? I will modify the article to say this; let me ponder how best to say it. Noel (talk) 05:12, 8 Dec 2004 (UTC)

History

I'm surprised not to see a history section so I added one, cobbled together from http://www.cne.gmu.edu/itcore/virtualmemory/vmhistory.html and http://www.economicexpert.com/a/Memory:page.htm - but I am not convinced it's definitive. Further updates welcome. (To the section on early personal computers, it's tempting to add the Bill Gates quote "640KB ought to be enough for anyone", but I believe it's apocryphal). joe 3 July 2005 21:54 (UTC)


Modified the last sentence to mention Apple's System 7, whose virtual memory preceded Windows 3.1's by almost a year. Hopefully, someone may write an OS X "example" to balance what has been written for Windows and Linux on this page already. —Preceding unsigned comment added by 69.14.114.108 (talkcontribs) 23:34, 17 September 2005


I've been told that the Burroughs B5000 had virtual memory using segmentation when it was released in 1961. It certainly had it early on, but I'm not certain it was in the earliest systems. I'll try to find something definitive and modify the article when I get it. --JeffW 22:55, 14 February 2006 (UTC)

According to the history page at the Unisys web site (www.unisys.com) the B5000 was the first dual processor and virtual memory computer. --JeffW 23:02, 14 February 2006 (UTC)

Swapping to RAM disks?

I wonder whether the following statement from the article is correct: Systems with a large amount of RAM can create a virtual hard disk within the RAM itself. This does block some of the RAM from being available for other system tasks but it does considerably speed up access to the swap file itself.

I have never heard of putting the swap file on a RAM disk, and I don't think it makes sense. Wouldn't it be better if the memory used for the RAM disk were available as "plain" memory? What is the benefit of swapping from RAM to RAM? Of course a RAM disk may speed up programs which use temporary files a lot, instead of (virtual) memory. But that has nothing to do with paging and the swap file. —Preceding unsigned comment added by 130.83.244.129 (talkcontribs) 04:03, 5 August 2005

I've tried to wrap my brain around this statement once more. Maybe the author meant that relocating other frequently accessed files to a RAM disk can speed up access to a swap file which remains on a hard drive. But even if that is the case, it does not do a very good job of explaining this. IMHO this part hurts the understanding of virtual memory more than it helps. —Preceding unsigned comment added by 130.83.244.129 (talkcontribs) 10:26, 9 August 2005

See http://kerneltrap.org/node/3660?from=100&comments_per_page=50 - though for me it's still complete nonsense - I just can't understand why should I use part of my RAM for swap partition when I could use it normally, in the usual way. --Anthony Ivanoff 09:21, 10 August 2005 (UTC)

Just my one cent here... Maybe what is meant by this article is this: In the 32-bit world, processes can only run in the lower 4GB portion of physical memory, but the processor and motherboard are able to address much more (e.g. 16GB). Although it is impossible to convince the kernel to allocate process memory in the physical space above 4GB, it is still possible to use it as a RAM disk on which to put the swapfile. I agree it makes more sense to modify the kernel, but what if you can't do that (i.e. you're using Windows)? --Stephan Leclercq 06:51, 11 August 2005 (UTC)

Bit of speculation: Apart from the issue with the 4GB limit described above, in most cases running a RAM disk to swap to takes away space from primary store and incurs extra overhead to manage the RAM disk. It's a net loss. However, that assumes we're talking about RAM internal to a single host. In a networked storage environment with fast connections the economics could be different. Up till now the interfaces would be a bottleneck, but with dual 4GB/sec full-duplex Fibre Channel or InfiniBand, data can be pulled off a SAN very quickly. A hardware RAM disk provides its own management resources, so the host doesn't get stuck with the extra work. If several hosts page to dynamically sized paging files on a shared RAM disk, the paging space allocation among the hosts could vary as needed and the usual drawbacks of paging to a file probably would not apply. This is pricey gear. Whether such a configuration would be worthwhile would depend on what actual hardware costs turn out to be and whether the virtual address space requirements on the hosts vary enough to merit the investment. Could be a neat solution, but only if you have the right problem. -ef —Preceding unsigned comment added by 68.38.203.239 (talkcontribs) 01:58, 26 January 2006

Why is it considered wise to have double the amount of actual RAM for a swapfile/swap partition?

I keep hearing that it's best to have a swap partition (or pre-allocated swap file, whatever you choose) be twice the size of the amount of actual RAM in your computer. Why twice that amount? Especially if you have 1.5-2GB of RAM and are not the type of person who will ever have several dozen memory-hog applications running at once. --I am not good at running 22:49, 17 September 2005 (UTC)

It's just a rule of thumb. If you have 1.5-2GB of RAM you probably have 150-200GB of hard drive. Sparing 1% of this for swap isn't too much, is it? It effectively doubles your memory. Most people would never need 4GB of memory (2 phys and 2 swap)...but the users that do need 2GB of phys likely want some breathing room beyond that 2GB. Justforasecond 06:07, 5 March 2006 (UTC)

Note that contents of this Virtual memory article discussed above, pertaining page files or swapping, have been recently moved to Paging article. --Kubanczyk 20:46, 4 October 2007 (UTC)

Diagram terminology

The terms on the diagram should probably be updated:-

Virtual Memory -> Virtual Address Space
Physical Memory -> Primary Storage
Hard Disk -> Secondary Storage

In fact, in theory, tertiary storage could be used in place of the hard disk. Although obviously this would be A Bad Idea, it may be worth clarifying the distinction between primary storage, and anything other than primary storage. Guinness 17:02, 26 November 2005 (UTC)

Paging file fragmentation discusion needs review

I'll yield to those with more expertise on the theoretical issues. Personally, the point I find most important, i.e. that virtual memory refers to a logical address space that is independent of the physical memory architecture, was made well.

Incidentally, thanks for this comment. It's nice to know one's efforts are appreciated :) Guinness 00:58, 25 February 2006 (UTC)

I have questions about the later sections about "myths" about the windows paging file and the subsequent section on virtual memory in linux.

I have personally encountered strange "apparent memory problems" that were only remedied by setting a static paging file size and defragging the disk. I found the solution in an O'Reilly book, and while I realize that O'Reilly is not infallible, I suspect they vet their material reasonably well. Since then I have had friends who've had similar problems, and before recommending the same fix I've surfed the web for updates and found numerous reports of users having similar problems and fixing them the same way. And when I suggested trying the same fix, it worked. I also seem to recall (though I could be mistaken) that Microsoft actually announced a fix for this problem at one point, but apparently it didn't work. (Please correct if this is wrong.) It could be that fragmentation per se is not the cause of the symptoms, but empirically the symptoms and the cure seem consistent with that explanation. I suppose this could be coincidental, but if so I'd love an explanation. Incidentally, this is the first suggestion I've seen anywhere, after a lot of looking, that this problem is not real.

I find the rationale for this not being a problem to be weak. To begin with, the author overlooks the fact that constantly resizing the file incurs overhead. While I appreciate that in a multitasking environment the disk will tend to seek around a lot and a bit of fragmentation won't make much practical difference, if the paging file gets fragmented badly enough, eventually it could. Unfortunately, I don't have great expertise on NTFS internals so can't comment on how much it is affected by fragmentation. If the paging file is on a FAT file system, fragmentation problems seem likely. In a desktop computing environment, where the user can boot up and run a calculator desk accessory, then launch an office suite and a graphics program all at once, then quit everything and just read email for a while, demand for memory can change significantly and often. If the paging file is being radically resized often and the disk is crowded, problem fragmentation seems likely.

User programs can vary in the way they access memory. Typically, programs access only a small section of their allocated memory at a time and the boundaries of that area tend to change gradually. However, there are exceptions, and those exceptions tend to be less graceful about paging. So sizing a static paging file requires some knowledge of the programs that will be running. An OS vendor cannot know that in advance. I suspect Windows defaults to a dynamic paging file so that any application the user runs will run reasonably well right out of the box. Users who understand the requirements of their software are in a position to size static paging files.

The next section on Linux virtual memory raises more questions. This section says that Linux is usually configured to page to a raw partition to avoid file fragmentation problems. If fragmented paging files are not a problem for Windows, why would they be a problem for Linux? Generally, UNIX-style file systems do not rely on contiguous block allocation, so one would expect this to be less of a problem for Linux than for Windows. I suspect the key advantages of paging to raw disk are that the overhead required to manage the filesystem layer is eliminated and that the space cannot be encroached upon by other files.

I don't know if I'm just not getting it, or if there are some errors, or if more explanation is called for. But as it stands, what's there seems to be either inconsistent, unclear, or both. -ef —Preceding unsigned comment added by 68.38.203.239 (talkcontribs) 26 January 2006

This whole argument about page file fragmentation being a performance hit falls apart when you consider that the Windows page file very rarely changes size, because you have to fill all available physical memory *and* all available pagefile space (which in a default Windows XP configuration is another 150% of your total physical memory) before it will grow. You get a balloon message when the page file is being resized, and at no other time. If you aren't seeing that balloon, your page file isn't resizing. If you *are* seeing that balloon, your real problem is that you don't have enough physical memory to do what you want to do with the machine -- in which case, a fragmented page file is the least of your performance problems. Warrens 06:56, 5 March 2006 (UTC)
So sizing a static paging file requires some knowledge of the programs that will be running.
For 99.99% of home and corporate users simple rules of thumb about static pagesize work fine. Justforasecond 07:03, 5 March 2006 (UTC)
That's a dangerous assumption to make, and, quite frankly, wrong. Think this through a little more. A static page file and a resizable page file work exactly the same almost all the time... but in those rare cases where additional memory is needed, being able to expand is very useful.... perhaps it's a Windows XP machine with 128MB memory and the user is forgetful about closing applications? Perhaps it's a long-running game of Civilization 4 or some other game that eats memory like it's going out of style? Understand that many people who use Windows don't even know what memory is, much less understand virtual memory or reasonable limitations. A resized page file is not a *significant* performance hit, and is certainly preferred to having your application, or Windows itself, crash because the OS can't complete a memory allocation request successfully. Having a resizable page file doesn't hurt this mythical 99.99% group of people you have spoken for, and in feasible (but rare) cases, could save them data loss.
If you're still not convinced, try this yourself:
  • From a freshly-booted system, measure out how long it takes to do a few operations that make use of the HDD. Use the performance monitor to see how much I/O activity is taking place, how much of the page file is being used, etc.
  • Create a page file that's very small but has a lot of expansion space. Reboot.
  • Use lots and lots of memory. Load every game, application, tool, media player, and document you've got handy. Again, use performance monitor to watch I/O and page file activity.
  • Watch for the balloon message indicating that virtual memory has expanded. Keep piling it on.
  • Watch the page file expand as you continue to use more memory.
  • You should now have, in theory, a fragmented page file, right? Reboot.
  • From your freshly-booted system, measure out how long it takes to do the same few operations as in the first step. Again, use the performance monitor, look at I/O activity, page file usage, etc.
What you're going to find is that there is no difference that can be attributed to anything more than margin of error. Remember, fragmentation is only an issue when you read a file sequentially, and as such, the drive head needs to move a greater distance to find blocks of data; that's not how a page file is used during regular system operation, especially not in large quantities when you're not using all available physical memory. Warrens 07:32, 5 March 2006 (UTC)
I may not have been clear -- an adjustable pagefile size *will* help out in rare cases, but if you are going with statically sized, you really don't need to have much knowledge of a particular users programs. 512MB of RAM? Make a 1-2GB page file. Simple enough, right?
I do think the adjustable size helps in *rare* cases, but consider the cases you mentioned. Users neglecting to close apps? Even if you had 10 open apps, it's unlikely they'll each require 150MB of memory. Civ 4? I'm not familiar with its memory model, but most memory-hogging software that pretends to be reliable will attempt to use a combination of disk and memory on its own -- NOT solely the built-in VM system. Justforasecond 16:46, 5 March 2006 (UTC)
I want to second the vote for re-working or even removal of the "myth" section. The windows/linux paragraphs clearly contradict each other, and no matter what the "truth" is, it hardly seems like encyclopedia-level discussion.
As for the points made so far, none of what has been said about Windows paging in the article or discussion makes any sense. Swapfile fragmentation most certainly DOES matter. For one thing, Windows pages out unused data in the background to free up RAM to use for buffering and to make room for allocations which have not occurred yet. Discontiguous swap space will cause the drive head to move farther and keep related I/O systems busy. Furthermore, when pages are swapped out to make room for swapping in data from an idle app (alt-tab), there are two paging operations going on, and the time required will be directly proportional to the distance the drive head has to travel to complete all its work. These paging operations involve relatively small amounts of data, so drive head latency dominates the equation. Also, individual processes are using 50-150MB each on XP these days, so swapping is an issue even on machines with 1G of physical RAM.
I'll follow up with edits or suggestions when I'm logged in and have a real keyboard. I'm on my Zaurus at the moment. :) -- Crag 66.213.200.181 22:47, 3 June 2006 (UTC)

I also agree that the misconceptions section needs some work. It appears to be a debate, and there is not enough evidence for the explanations given. Since Wikipedia isn't a debate ground, I think it should be clear that the views expressed are by *some* people. It has been my experience that windows automatically resizes the page file to be larger regardless of the maximum size. I also agree that if windows is using 2-3x physical memory, that the biggest issue is not page file fragmentation, however, that argument is irrelevant to the page file discussion. A defragged page file IS faster than a non defragged page file, and regardless of the performance increase, this should be noted for accuracy. —Preceding unsigned comment added by 137.186.142.119 (talkcontribs) 21:39, 28 June 2006

I agree this isn't the place for debate. But there are a lot of myths out there. I used to set my page file as static. But I realised there is no need. What you should do is set the minimum size to the maximum you're ever likely to need, perhaps the same as you would set a static file. Then set the maximum to something larger than that. As someone else pointed out, Windows tells you when it's increasing the size. With my config, this very rarely happens, but when it does, it's probably good that it does. Fragmentation doesn't matter much, since this should never happen except in emergencies. When I restart, the page file will go back to the minimum size and will not be fragmented (well, unless it already was). All that really needs to happen is that you set the page file to a level at which it rarely increases. A static size isn't necessary Nil Einne 16:34, 9 January 2007 (UTC)


macintosh system 7 and win 3.1 virtual memory

does anyone know the details of the mac system 7 or windows 3.1 vm systems?

The mac VM system seemed to be pretty immature when I used it. I don't think there was any address-space protection. it had a bit of paging (you could extend your 8MB of RAM to 10MB and make your progs run dog slow) and must have had some OS hooks to manage this, but it was easily crashable, which is one indicator of lack of a fully-implemented vm system.

win 3.1 seemed to be mostly a glorified UI on top of DOS. did it have any vm system at all? maybe a swap file?

Justforasecond 06:12, 5 March 2006 (UTC)

is the vm debate really settled?

many computers (though not the PCs and Macs we're sitting at) do not use virtual memory systems. it slows things down, makes performance unpredictable, consumes memory, and adds additional points of failure. your anti-lock braking computer, for instance, probably does not implement a virtual memory setup.

future compilers and OSs could be sophisticated enough to obviate some of the need for virtual memory systems. address-space protection becomes unimportant if you can trust that code won't try to chase memory that it doesn't own.

Justforasecond 06:20, 5 March 2006 (UTC)

I can't imagine the memory it consumes is more than a few percent of total memory, and for a general-purpose desktop or server the advantages are significant. Apps don't need to worry about getting stuff out of memory the instant they have finished with it. You're right about single-purpose realtime embedded systems, though. VM would be more trouble than it's worth there. Plugwash 16:39, 19 May 2006 (UTC)

Hi All

I don't like posting this here, but it seems that this is the only place where people know! I'm using 4GB of RAM on 32-bit Windows XP. I use the /3GB switch which, according to Microsoft, can allocate 3GB of RAM for user apps. I use a fixed page file on a separate partition, 5GB in size. I'm talking about a rendering process with very big resolution. Even with those settings my PC is running out of virtual memory. The next thing I'll do is make the page file partition bigger, but I was thinking about getting the best out of it. So I thought about the format of that partition. It will be NTFS for sure, but I was thinking about clusters. What cluster size would be best for the translation? I guess that, because of the 32-bit addressing of RAM, the default 4K cluster is fine, but I'm not sure at all. Please, someone who knows - give me a hand with this. Many thanks. —This unsigned comment was added by Hepo (talkcontribs) .

First of all, a separate partition for a Windows pagefile is going to be detrimental to performance, and creates an artificial limitation where none is needed. You're forcing the drive heads to move further and do more work.
Second, your best bet is probably to move to 64-bit Windows. Generally speaking, there is a 4GB limit on pagefile size per partition on 32-bit versions of Windows, though you can use a method documented in MSKB237740 to put multiple page files on a single partition. You will need 64-bit Windows if you want to create larger pagefiles. Your rendering application may have a 64-bit version available as well, which will allow the application to use much more than 3GB of memory, physical or otherwise. If upgrading to the appropriate hardware isn't financially feasible, get another *fast* HDD (10k RPM SATA or 15k RPM SCSI) and put an additional pagefile on that drive. Windows will split pagefile activity between drives to derive the best performance, so this is a much better solution than having multiple pagefiles on a single drive.
Third, cluster size is basically meaningless in the context of pagefile access. Windows uses the space allocated for the pagefile in special ways to squeeze out the best performance, and changing the cluster size isn't going to help. Warrens 04:39, 1 April 2006 (UTC)

Note that contents of this Virtual memory article discussed above, pertaining page files or swapping, have been recently moved to Paging article. --Kubanczyk 20:46, 4 October 2007 (UTC)

Thanks Warrens

Again thanks for the reply. There are a lot of factors that I didn't mention about my situation. You are right about the 4G maximum limitation of 32-bit Windows (I overstated that). I have 64-bit Windows on my new workstations, and the thing is that those 32-bit ones have an OEM Windows version (in other words, there won't be an upgrade for them), and I still want them to serve me as well as the new ones. About the separate partition for swapping, Micro$oft says it is best for performance because of the busyness of the system drive, and my swap drive is next to the system one. I store my "ready-data" on file servers, so the hard drive serves only Windows. In case cluster size is meaningless, I guess there is nothing else that can be done - I have to go for an upgrade. Thanks for the multiple page files article, I wasn't aware of that (maybe some day in desperate need I'll try it :)). Thank you so much again - there must be more people like you in this world. Best Regards. —This unsigned comment was added by Hepo (talkcontribs) .

Hey Hepo -- you might want to look into your apps and make sure you don't have a memory leak. If the apps keep using more and more memory for no obvious reasons, or if they don't seem to use less memory when they aren't being used much you might have a prob. Justforasecond 01:35, 2 April 2006 (UTC)

I will. In fact this is a new version with which those problems came up. The thing is that this piece of software swaps everything that can be swapped - it desperately wants a 64-bit OS, I guess. Thanks again. I'm thinking now about a system-managed page file - it seems that fixed may be a problem as well. Wish you Greats. —Preceding unsigned comment added by 82.199.204.100 (talkcontribs) 03:44, 3 April 2006


Contradictory lead and sections (Split Suggested)

The term virtual memory is often confused with memory swapping, probably due in part to the Microsoft Windows family of operating systems referring to the enabling/disabling of memory swapping as "virtual memory"[citation needed]. From Windows 95 onwards, all Windows OS versions use only paging files. In fact, Windows uses paged memory and virtual memory addressing, even if the so called "virtual memory" is disabled.

I agree that "virtual memory" != "swapping". Yet, later, we have specific implementations of "memory swapping" in popular operating systems. I cleaned them up a bit, but these sections strike me as completely useless (I removed a whole hell of a lot of "to change your swapfile, do this" already).

If nobody objects soon, I'm going to just flat out remove this and remove specific references throughout the entire article to swapping functionality. Thanks, Windows. --JStalk 20:17, 25 August 2006 (UTC)

I heartily agree with you, references to memory swapping should be removed from this article entirely and moved into a separate article ("memory swapping" is currently just a redirect to virtual memory, which is just plain wrong). I made a brief attempt to clarify it a while back when I re-wrote the introduction, but it needs some extensive work to separate into two articles, and thus far I have been too lazy to do this myself. Guinness 16:09, 28 August 2006 (UTC)
Jed, I'm going to revert your entire contribution, as you introduced some brutally bad factual errors, while also removing factually correct and relevant information. Swapping is a -completely- incorrect term to use w/r/t Windows NT in any form. If you don't know that, frankly, you shouldn't be writing about virtual memory on Microsoft Windows. -/- Warren 16:31, 28 August 2006 (UTC)
Christ, pal, easy. I attempted to simply distill the information that was already there into a more acceptable format. I won't claim to be an expert on the matter, but what makes you say swapping is an incorrect term? As I was taught way back when in Nerdery 101, swapping was the process by which pages that were no longer used were flushed to disk and the physical memory freed. Am I wrong?
I strived to work with the "facts" (or not) that were already on the page, not add any information. About the only information I see on your revert that I added is the bit about moving or deleting the swapfile. I stand by my edit. Perhaps the only line I may disagree with in hindsight is:
The Windows platform implements virtual memory as a hidden "swap file".
"Virtual memory" there was a bad choice of words, I agree. How about we go through on a case-by-case basis and you tell me what factual errors I introduced from content already on the page before slashing at me with a reproachful attitude. It's detestable, how you come off -- please don't bite the newcomers, indeed.
If I introduced factual errors, I apologize, that was not my intention. My intention was to remove the unwelcome content on the page. --JStalk 02:17, 29 August 2006 (UTC)
Swapping, as applied to Windows (I consider Peter Norton an authority on anything computing -- his long and diverse contributions to programming as a whole evidence this.) So how is swapping an incorrect term to use with reference to Windows? --JStalk 02:25, 29 August 2006 (UTC)
I too am against the line The Windows platform implements virtual memory as a hidden "swap file". It can be rephrased as To provide the larger address space, Windows uses a hidden "swap file", or Windows uses a hidden "swap file" to act as an extension to the physical RAM, with some detailing on what and how the extension works. --soumসৌমোyasch 07:28, 30 August 2006 (UTC)
That Peter Norton article is from 2000. He was almost assuredly writing about Windows 9x at that point, because very few people outside from businesses were using NT 4 or 2000 back then. Yes, the term "swap file" is appropriate for Windows 9x, but it is not for NT-based operating systems. You can try doing a Google search on "page file site:microsoft.com" and compare it with "swap file site:microsoft.com" to see a pretty clear delineation between what OSs the terms are used with. With that said, back in the 1990s it was common for people to call NT4's paging file a "swap file", even amongst Microsoft employees, because Windows 3.x & 9x was far more popular at the time, and the term "swap file" had a lot more traction.
Anyways, if you're looking for an authoritative source on accurate, technical information about Windows NT and its descendants, Norton isn't your man. These days he's a book author first, and a technologist second. Instead, pick up the book "Microsoft Windows Internals" by Mark Russinovich and David Solomon; it's a fantastic, well-written book and it digs deeper into the real guts of Windows better than anything else out there. Russinovich is well known for his Sysinternals line of tools, which you may have heard of, and he was recently hired by Microsoft to work on the Windows kernel... so yeah, he knows his stuff. Of interest to this discussion is Chapter 7 which covers memory management in eye-watering detail. It's the single largest chapter of the book at 110 pages! While this article isn't really the place to go into similar levels of detail, it's quite clear that "swap" is not part of the modern nomenclature, and our summation of Windows' virtual memory system needs to reflect that accurately. -/- Warren 11:45, 30 August 2006 (UTC)
You chose to completely ignore your allegation that I introduced factual errors, instead slamming Peter Norton. I am beginning to notice that you are acting uncivil and in bad faith.
My response to you is not appropriate for this talk page any longer, and I will post the completed response on your talk page. --JStalk 22:53, 30 August 2006 (UTC)
It's not worth your time to get offended that I'm pointing out a source of information on the subject that is far more qualified on the subject than Peter Norton is. Don't take it personally, it's the truth... accept it and move on. Now do you really need me to describe your contribution in depth to point out the glaring factual errors? Okay, let's do that, but you really aren't going to like it:
Sentence 1:The Windows platform implements virtual memory as a hidden "swap file" on disk.
No it doesn't. Virtual memory is implemented as described in the rest of the article; the CPU and OS share responsibilities for presenting a contiguous address space to applications. That address space can be backed by a page file or a swap file, but that is only a part of the bigger picture.
Sentence 2:Through the versions of Windows, this file has moved and been renamed several times.
Twice -- and it depends on how you want to count it. It is called 386SPART.PAR in Windows 3.x and WIN386.SWP in Windows 9x. The paging file in NT has always been called PAGEFILE.SYS; it's never been renamed in that line of operating systems. The text you deleted made this point fairly clearly.
Sentence 3:Moving it or deleting it while the system is running (or sometimes even outside of the system) is often a cause for drastic error.
It is actually impossible to remove or change the location of the page file while Windows is using it. If the file is deleted while the OS isn't running, a new file will be generated next time the OS boots; the only circumstance in which the OS will fail at this point is if it is unable to create that new paging file (full HDD, e.g.).
Sentence 4:(Windows XP, however, will regenerate the swap file at boot should it be deleted while Windows is not running.)
Correct, but previous versions of Windows do this too... and it's still not called a swap file.
Sentence 5:In Windows XP, virtual memory was improved by allowing page files to reside on multiple drives.
Ignoring this inaccuracy that virtual memory is the page file, this is not a feature new to Windows XP. Multiple page files were possible in NT 4.0; possibly 3.51 too, but my reference manuals on that version are packed in a box right now so I can't check easily.
Okay? Are we clear on all that? If you still want to go on finger-pointing and claiming "incivility" and "bad faith" instead of simply accepting that the article had it right and you had it wrong, that's your choice, but it's not a good use of your time. Instead, go track down that book I mentioned earlier and get reading. I wouldn't be taking the time to explain this if I wasn't absolutely certain that the weight of evidence out there didn't support it. -/- Warren 23:43, 30 August 2006 (UTC)
Since you decided to leave it here, I'll respond here. I am offended with you because you made a personal attack at me about my level of knowledge and familiarity with a specific piece of subject matter, due to the wording I used. Your defense of that is also completely incorrect, as I will prove in the following essay.
Let us, for the purposes of this essay, say that you own a Ford car. One day, you decide you would like a Toyota instead, because you have heard great things about Toyota automobiles. So, you take out your tools, go outside, and remove all Ford logos from your car. You then spraypaint "Toyota" all over the car and add Toyota logos to replace the Ford ones. Being proud of your accomplishment, you begin to tell your friends that your car is a Toyota.
Needless to say, your friends are going to look at you like you are an idiot.
Your automobile is still a Ford. It drives like a Ford, it looks like a Ford, and it is most certainly registered with your motor vehicle bureau as a Ford, regardless of what you may think it is.
This same situation is playing out with the Windows swap file. With the release of one of the Windows versions, the developers renamed the swap file from win386.swp to pagefile.sys. This has caused many Microsoft people, notably a MS-MVP named Alex Nichol[1] and scores of people that read Microsoft documentation, to call the swapping file the "page file". You can call that Ford a Toyota, that does not imply that it is no longer a Ford.
Now, that is all well and good. I would not care so much what the file is called if it did not bleed into technical discussions about the underlying mechanism.
For some reason, you have thrown quite a big accusation at me, that of "introducing factual error" when you reverted my edit to Virtual memory. In defense of your accusation and revert, you stated that:

Swapping is a -completely- incorrect term to use w/r/t Windows NT in any form. If you don't know that, frankly, you shouldn't be writing about virtual memory on Microsoft Windows.

Let us begin unraveling your claim with an introduction to both paging and swapping. I am tired of people that have not written a line of working software telling me what I know. I just cannot keep civil with you, and I apologize for that.
Paging
Paging is a feature of modern processors, the IA-32 series included, that allows the linear address space of the processor to be mapped to a series of 'pages' (most commonly 4,096 bytes in size, but there are a few options) at the operating system's discretion. Through a complex translation, virtual addresses referring to these pages are translated to physical addresses in memory using a variety of mechanisms before being sent out on the address line. This is the basis of "Virtual Memory", also known as "Paging".
In short, addresses specified by applications (called logical addresses) are translated in hardware to an actual, physical address (called absolute addresses). This allows pages containing code and data to be moved around with no impact on applications; to the application program, paging is completely transparent. On the operating system side, paging is implemented via a series of page tables in memory that the OS sets up. Access restrictions can also be implemented in hardware at OS discretion. From the Intel Architecture Software Developer's Manual, chapter 3, section 6:

When paging is used, the processor divides the linear address space into fixed-size pages (of 4 KBytes, 2 MBytes, or 4 MBytes in length) that can be mapped into physical memory and/or disk storage. When a program (or task) references a logical address in memory, the processor translates the address into a linear address and then uses its paging mechanism to translate the linear address into a corresponding physical address.

If the page containing the linear address is not currently in physical memory, the processor generates a page-fault exception (#PF). The exception handler for the page-fault exception typically directs the operating system or executive to load the page from disk storage into physical memory (perhaps writing a different page from physical memory out to disk in the process). When the page has been loaded in physical memory, a return from the exception handler causes the instruction that generated the exception to be restarted.

But wait a minute! There's disk storage in there! That means it is time for our next section...
Swapping
Swapping is the process used by some operating systems to flush unused pages to disk to make room for others. After the page is saved to disk (called "swapping out"), the page is marked as "not present" in the operating system's or program's page table (described above). To the processor, this is not a concern, because it is not using the data in that page at all.
When that page is eventually used, however, here is the short progression of steps that happens:

1. The instruction in question references the virtual address of a page that has been swapped out.
2. The processor freezes the instruction and begins translating the virtual address.
3. The processor determines that the virtual address maps to a page (via the page table) that has been flushed to disk and is not present.
4. The processor generates a #PF (page fault), which is a cue for the operating system to "swap in" the page (load it from disk).
5. The operating system's #PF handler loads the page from disk into physical memory -- possibly in a different location -- and updates the page table. If the #PF handler was invoked with an invalid address, this is where operating systems generate an "Invalid Page Fault" error (Windows' STOP).
6. The #PF handler returns, indicating to the processor that everything is fixed, and the processor reevaluates the task's current instruction that was frozen.
This process is completely transparent to application developers. It is called swapping, always has been (long before Windows existed), and always will be.
The Nomenclature
You said swapping is an incorrect term to use for Windows NT, because, quote:

[...]it's quite clear that "swap" is not part of the modern nomenclature, and our summation of Windows' virtual memory system needs to reflect that accurately.

"Page file" is a Microsoftism that they seem to have adopted. Swapping is implementable in operating systems without using the paging mechanism of the processor (it just requires more work). Tying swapping to paging is a mistake on Microsoft's part, as they are two independent processes. It is a Microsoftism.
The Microsoft Windows "page file" is a swap file. And you can give me the riff-a-roo about introducing "factual errors" because I prefer to stay with the computer science term, and I'll respond just as I am now.
How can a term describing a process (potentially) completely independent of paging not be part of the modern nomenclature? Microsoft calls shared libraries "Dynamic Link Libraries", that does not mean they are not shared libraries. I feel in an encyclopedia struggling to stay unbiased one way or the other, letting a Microsoftism such as "page file" slip into any writing on Wikipedia is an admission that we accept said Microsoftism. I don't care if we're talking about Windows or turkey basters; Microsoftisms are not Wikipediaisms, under any circumstances. For those reasons, I feel Swapping, Thrash (computer science), Mapping, Virtual memory, and Memory management need attention on this issue, just to name a few.
I should be writing about virtual memory, regardless of what you think, Warrens. Because of Microsoftisms like that, articles like Virtual memory are turning horrible. --JStalk 00:30, 31 August 2006 (UTC)
Oh, and, you can release the kernel locks on the swap file programmatically if you know the API to touch and are in the right Windows subsystem, you will just crash your machine -- I'd proof of concept it, but I'm not in the mood and it would require me to dust off my C expertise.
Although I agree the bit about moving the file while the OS was running was a bit much. Terms like "impossible" are a bit strong, though. --JStalk 00:35, 31 August 2006 (UTC)

First of all, car analogies have no place in a discussion about virtual memory. Let's stay focused. It wastes your time writing it; it wastes my time reading it and trying to understand what the heck you're trying to say.

Second, the fact that you are linking to an article which more or less perfectly restates what I've already said -- and what the article itself has said for a very long time -- makes me wonder why you're arguing this so much. Is it because you don't like being told you're wrong? You blew away factually correct information in favour of factually incorrect information, and you got called on it; believe me, I can understand why you'd be pissed off, but don't take it personally... consider it an opportunity to correct false presumptions and to learn something.

Third, regarding this:

I feel in an encyclopedia struggling to stay unbiased one way or the other, letting a Microsoftism such as "page file" slip into any writing on Wikipedia is an admission that we accept said Microsoftism.

You really, really need to read Wikipedia:Neutral point of view. Slowly and carefully. Don't even bother contributing to Wikipedia again until you've done this. I'll quote the second half of the very first sentence of Wikipedia's NPOV policy here, because it's relevant to the mistake you're making: "(Articles) must represent all significant views fairly and without bias." What this means is, you as an editor can't declare a term to be a "Microsoftism" and thus render their terminology invalid and not suitable for inclusion in an article. Microsoft is the #1 operating system vendor in the world; their Windows NT implementation of virtual memory and paging exists on over half a billion computers, and that number grows every day. Accordingly, what they name a technology carries a lot of weight. If Microsoft calls it a page file -- and they have that right, since it's their creation -- then we report it as a page file. End of discussion.

Wikipedia isn't here for you to espouse your opinion on how Microsoft got their naming wrong. Go start a blog or something if you want to do that. -/- Warren 12:12, 31 August 2006 (UTC)

Note that contents of this Virtual memory article discussed above, pertaining page files or swapping, have been recently moved to Paging article. This time it was backed with reliable sources. --Kubanczyk 20:46, 4 October 2007 (UTC)


Virtual Memory's Real definition

Virtual memory is a method whereby the operating system uses the hard drive as though it were RAM when the OS is low on RAM. The data stored on the hard drive is called a swap page or page file.—Preceding unsigned comment added by 192.234.16.2 (talkcontribs)

No it isn't. What you're referring to is "Memory Swapping". This is a common mistake resulting from Microsoft's referring to the enabling of Memory Swapping incorrectly as "Virtual Memory". The article's definition is correct. Windows uses virtual memory even if the so-called virtual memory is switched off; turning this off in fact turns off the memory swapping. Guinness 10:49, 11 October 2006 (UTC)
Pretty well answered. Here in my company we've set up a written test for applicants to computer-related positions. This particular question, "What is virtual memory?", was never answered correctly. People keep saying: "A technique to extend real memory", "The use of the hard disk paging file as memory", etc. The Wikipedia definition is the correct one, but it is a bit too long. There's a pretty neat definition I've stumbled on on the net, which is even more concise and as accurate as the Wikipedia one:
Addressable space that appears to be real storage. From virtual storage, instructions and data are mapped into real storage locations. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of auxiliary storage available, not by the actual number of system memory locations. Contrast with real memory. Synonymous with virtual storage.
I love the idea that your company is asking the question about virtual memory. Too often people have no clue, or they do not care because we do not use it every day at our jobs. I would be a bit careful, though, about being too critical of the answers if this is the only way you are asking the question. An easy question to ask someone, to see if they know the difference between the paging file and virtual memory, is to ask them to determine how much of the paging file is being used, or to ask what value is displayed in the PF Usage field in the Task Manager. This may get better results without making the question so open-ended - you could also ask them to describe the difference between protected, virtual and real memory.

www.ncsa.uiuc.edu/UserInfo/Resources/Hardware/IBMp690/IBM/usr/share/man/info/en_US/a_doc_lib/aixuser/glossary/V.htm Loudenvier 12:50, 11 October 2006 (UTC)

I would like you all to agree on some fundamentals, more or less:

* virtual memory - a mechanism allowing the OS to use devices other than physical RAM as RAM too;
* swapping - a mechanism allowing the OS to transfer data between a virtual memory device and physical RAM;
* paging - a mechanism allowing the OS to address both physical and virtual memory device data in a unified manner;
* pagefile/swapfile - (it looks like these are the same; Windows users like "pagefile", UNIX/Linux users like "swapfile", per historical naming conventions) a filesystem representation of data stored in virtual memory - this refers only to devices with filesystems, e.g. HDDs or removable flash drives (useless, but doable).

Then once you agree on these fundamentals, you may start to discuss details (and maybe stop indulging yourselves)... Maybe for some confused people it would be nice to differentiate between RAM and storage, or just to point them in the right direction... My 2 cents. BTW, the Windows section is kind of wrong. [Yellow01]

People who do know what they're talking about do agree on the terms (although your definitions are somewhat sketchy right now). It's the clueless people who keep equating virtual memory with swapping, and the article is currently fairly bad at making that difference clear (and keeps blurring the line). As long as nobody fixes, splits and clears up the article, we just have to tolerate and ignore these people. -- intgr 09:16, 28 November 2006 (UTC)
"virtual memory - a mechanism allowing OS to use other devices other than physical RAM as RAM too;" - Again this is wrong. Virtual memory, or to give it its full title "Virtual Memory Addressing", is simply the technique whereby non-contiguous memory blocks are presented to an application as contiguous. It is entirely independent of the physical memory, whether that be volatile RAM, magnetic disk, or hell, even punch cards could be addressed virtually. The fact that VMA is commonly used in conjunction with swapping is neither here nor there; they are two distinct technologies, and they can both be utilised with or without the other. (Intgr - totally agree, I've been saying for months that I want to re-write them both, but haven't yet had the time; maybe I'll find time over Christmas, unless someone beats me to it). Guinness 09:05, 18 December 2006 (UTC)

I don't personally think Logical Address should be joined into Virtual Memory until we've fixed the Virtual Memory a bit. toresbe 09:15, 3 December 2006 (UTC)

I am new to editing, so here is my attempt at describing virtual memory Eric 20:03, 8 January 2007 (UTC)

The memory pages of the virtual address space seen by the process may reside non-contiguously in primary, or even secondary, storage.

Virtual memory or virtual memory addressing is an addressing scheme that requires implementations in both hardware and software.

The hardware must have two methods of addressing RAM, real and virtual. In real mode, the memory address register contains the integer that addresses a word or byte of RAM. The memory is addressed sequentially; adding to the address register moves the location being addressed forward by the amount added.

In virtual mode, memory is divided into pages, usually 4096 bytes long. These pages may reside in any available RAM location that can be addressed in virtual mode. The high order bytes in the memory address register reference tables in RAM at specific locations low in memory, and these tables are addressed using real addresses. The low order bytes in the address register are an offset of up to 4096 bytes into the page ultimately referenced by resolving all the table references of page locations.

The size of the tables is governed by the computer design and the amount of RAM purchased by the user. All virtual addressing schemes require the page tables to start at a fixed location low in memory that can be addressed by a single byte, and to have a maximum length determined by the hardware design. In multitasking systems with more than one user, the tables further down the chain of arrays are duplicated for each user and can reside in any location that can be addressed by the real mode of addressing.

In a typical computer, the first table will be an array of addresses of the start of the next table and the first byte of the memory address register will be the index into the array. Depending on the design goal of the computer, each array entry can be any size the computer can address.

The number of tables and the size of the tables will vary by manufacturer, but the end goal is to take the high order bytes of the virtual address in the memory address register and resolve them to an entry in the page table that points to either the location of the page in real memory or a flag to say the page is not available.

If a program references a memory location that is within a page not available, the computer will generate a page fault. This will pass control to the operating system at a place that can load the required page from auxiliary storage and turn on the flag to say the page is available. The hardware will then take the start location of the page, add in the offset of the low order bytes in the address register, and access the memory location desired.

All the work required to access the correct memory address is invisible to the application addressing the memory. If the page is in memory, the hardware resolves the address. If a page fault is generated, software in the operating system resolves the problem and passes control back to the application trying to access the memory location.

This entire scheme provides two major features to the computer user:

1. Applications can use more memory than the real memory installed in the computer. At some point, if the application is using much more memory than is available in real mode, the number of page faults will degrade system performance. The actual maximum usable ratio of real to virtual memory will depend on the application and the order it uses to address memory.
2. The system can provide total memory isolation between users and applications by maintaining separate page tables for each user; memory used by one user is invisible to memory used by other users, since they each have their own page tables. There is overhead with this technique, since page tables have to be loaded and saved every time there is a context switch to a different user.

--Eric 20:03, 8 January 2007 (UTC)

I think the current first paragraph is better than yours. You are correct in the detail, however this describes how virtual memory is implemented. The first para should stick to the definition and implementation detail should follow. Guinness 03:07, 14 January 2007 (UTC)

I am new to trying to edit anything here, so I went back and re-read the current first paragraph. I don't think it is an accurate definition: it defines it as a technique used by operating systems and does not talk about the hardware. Any definition needs to describe both the hardware and software implementations, since without a hardware implementation, virtual memory would be impossible to implement. In addition, it refers to “more commonly used in multitasking.” Although this is true, it has nothing to do with a virtual memory definition and is misleading to a novice.

Perhaps I should have started with the following definition: Virtual memory is an addressing scheme implemented in hardware and software that allows discontiguous memory to be addressed as if it were contiguous. The technique used by all current implementations provides two major capabilities to the system: 1. Memory can be addressed that does not currently reside in main memory; the hardware and operating system will load the required memory from auxiliary storage automatically, without the program addressing that memory being aware of it. 2. In multitasking systems, total memory isolation can be provided to every task except the lowest-level operating system. --Sailorman2003 19:29, 29 January 2007 (UTC)
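The "discontiguous addressed as if contiguous" part of this definition can be illustrated with a toy page table (the frame numbers below are made up for illustration):

```python
# Toy illustration: a contiguous range of virtual pages mapped onto
# scattered (discontiguous) physical frames.
PAGE_SIZE = 4096

# Virtual pages 0..3 are contiguous; the physical frames are not.
page_table = {0: 7, 1: 2, 2: 9, 3: 4}   # vpn -> pfn (arbitrary frames)

def virt_to_phys(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

# Virtual addresses 0 .. 4*PAGE_SIZE-1 form one contiguous region,
# even though the backing frames 7, 2, 9, 4 are scattered on purpose.
phys = [virt_to_phys(v) for v in (0, PAGE_SIZE, 2 * PAGE_SIZE)]
```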

I do agree with and like this definition. -- intgr 07:44, 30 January 2007 (UTC)

The first paragraph of the background that I added needs a description of what happens when you add an integer to the memory address register in Virtual Mode. I am too tired to do it tonight and I am open to suggestions on the wording.--Eric 02:53, 5 February 2007 (UTC)

"Separate swap partition for Windows"

I removed this paragraph:

Also, though it is not very common for Windows users, it is possible to use a whole partition of a HDD for swapping, just like most of Linux users are used to do (see below). By using a separate swap partition, it can be guaranteed that the swap region is at the fastest location of the disk. On HDDs with moving heads, this is generally the center.

First of all, modern Windows operating systems don't use the word "swap"; any text that uses this term is immediately suspect. Second, the notion that the "center" of a disk is fastest is unsupportable by fact. Third, raw page file performance is, most of the time, not relevant, because there are usually multiple I/O requests going at the same time, resulting in the heads doing a lot of moving around. Page file access is almost never sequential in nature, when reading -or- writing, so any kind of performance benefits that can come from having it located at the fastest part of the drive is nullified. Fourth, when Windows Setup runs, it generally tries to put the page file in a place that's pretty fast anyhow; what usually ends up happening is it gets put near the operating system files, which is actually a pretty good way of reducing the sheer amount of drive-head movement in a heavy paging situation. Separate page file partitions tend to put the page file further away from the operating system files and user data, thus reducing performance. -/- Warren 13:36, 8 February 2007 (UTC)

Deliberate nonsense and misinformation. The file win386.swp (Win 9x/Me) doesn't behave in the way you describe; nevertheless it applies to pagefile.sys (WinNT/2000pro/XP). You also removed some sentences that, previous to my recent edits, belonged to the Linux section. Would you say that this method is nonsense in Linux environments too?--Dr. Who 13:48, 8 February 2007 (UTC)
Ok, listen, I left this article and restored it as it was previous to my edits, I blanked my user page, and I hope you will rest well. I was not planning to become a nightmare for the lots of American/British/Commonwealth arts/science/technology gurus that are here under many umbrella nicks, so I'm leaving. Dr. Who 14:15, 8 February 2007 (UTC)
"Second, the notion that the "center" of a disk is fastest is unsupportable by fact."
Although I cannot be bothered to spend time looking for reliable sources right now, this is a very common and accepted fact among people who deal with disk storage. If you have time to kill, you can refer any hard disk review for benchmarks for evidence. For example, refer to the minimum/maximum transfer rates diagram of this review: [2], and specifically the transfer rate/offset decay graph [3].
And I would really rather not start another jargon debate, but I cannot see how using "swap" in relation to Windows is wrong. The word doesn't magically change its meaning — swap is still swap, whether on Windows, Unix or $yourFavouriteOS. -- intgr 15:33, 8 February 2007 (UTC)
Disk performance is governed by the rotation speed, the seek speed, and the data width (assuming we are not talking about RAID devices). Since the disk is a rigid platter and all parts rotate at the same speed, the probability of the arm reaching a track just after the home position has passed the head is the same no matter what track is being accessed. By the same logic, the average time required to reach the desired sector for any given track is the same due to the constant rotation speed.--Eric 18:21, 21 February 2007 (UTC)
That's correct; I was forgetting that swap performance is generally dominated by disk seeks (though inner tracks are definitely faster for sequential reads). -- intgr 18:38, 21 February 2007 (UTC)
Angular velocity ≠ linear velocity--Doktor Who 22:45, 22 February 2007 (UTC)
Velocity is not important here at all. The problem with swap is that primary storage is generally assumed to be random access memory, i.e., requests to any address are assumed to take constant time, so data is often scattered around near-randomly. However, random access order is the worst possible order for sequential access memory devices. On average, hard disks without command queuing will have to wait a little more than half a rotation on every seek. On a 7200 RPM disk, this means that if your requests are shorter than ~500 kB (inner tracks) or ~330 kB (outer tracks), the disk spends over 50% of its time seeking. While operating systems most likely implement some kind of readahead to swap in more than a single page at a time, the request size is probably still short. Hence, the performance will be dominated by disk seeks and, ultimately, the difference in raw throughput will be negligible. -- intgr 07:30, 23 February 2007 (UTC)
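The break-even figures quoted above follow from a one-line model: with one seek of roughly half a rotation per request, the 50% point is transfer rate times average rotational latency. A quick check (the transfer rates below are assumptions chosen to reproduce the quoted numbers, not datasheet values):

```python
# Model from the comment above: each request pays ~half a rotation of
# rotational latency; at the break-even request size, transfer time
# equals that latency, i.e. the disk spends 50% of its time seeking.

RPM = 7200
rotation_ms = 60_000 / RPM          # ~8.33 ms per full rotation
avg_latency_ms = rotation_ms / 2    # ~4.17 ms average rotational latency

def break_even_kb(transfer_rate_mb_s):
    # 1 MB/s == 1 kB/ms, so break-even size (kB) = rate * latency (ms)
    return transfer_rate_mb_s * avg_latency_ms

# Assumed transfer rates, chosen to reproduce the quoted figures:
faster_tracks = round(break_even_kb(120))   # ~500 kB
slower_tracks = round(break_even_kb(80))    # ~333 kB
```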

I'm responsible for the "On HDDs with moving heads, this is generally the center" wording, and it was misunderstood because I wrote it too quickly. I have replaced the restored wording about the beginning of the disk with "generally the center cylinders between the inner and outer edges of the disk (except for disks with fixed heads)" - that is, on a 100-cylinder disk, "center" means cylinder 50, not the innermost cylinder. It's a well-documented fact in system performance literature, going back over 30 years, that seek time almost always outweighs all other considerations for placement of files on disks, the only exceptions being when you can eliminate it entirely (e.g., fixed heads, non-volatile cache). Yes, on some drives there are different data densities between the innermost and outermost tracks, but seek time continues to overwhelm all other aspects, even that one. RossPatterson 23:46, 21 February 2007 (UTC)

It is certainly true that on a 100-cylinder disk, data on cylinder 50 guarantees that you can't seek more than 50 cylinders, and data in the middle third of the disk will most likely be accessed with the head crossing the least number of cylinders. My experience with disks is old, but the last time I dealt with them, the bulk of the seek time was in the acceleration/deceleration of the arm. Once the arm was at speed, the distance of the arm movement was a smaller proportion of the total seek time.
The primary factor governing paging performance is thrashing caused by the working set being too large in relation to the size of RAM. Second to that is the accuracy of the algorithm that selects the page to swap and third is the number of dirty pages that have to be written before they can be re-used.
The speed of the disk determines the efficiency of the single server queue that is usually the case on small computer systems. It is rare that a multi server paging queue is available. So, the faster the disk, the greater number of page faults that can be serviced without increasing the likelihood that the queue will grow too large; thus, enabling a larger working set for a given RAM size when a faster disk is used.--Eric 19:50, 22 February 2007 (UTC)
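The "algorithm that selects the page to swap" and the dirty-page cost mentioned above can be sketched with a toy LRU replacer (Python; the names and the two-frame capacity are illustrative only):

```python
# Illustrative LRU page replacement: evict the least recently used
# page on a fault; a dirty victim needs a write-back before its
# frame can be reused, as noted in the comment above.
from collections import OrderedDict

class LRUPager:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()   # page -> dirty flag, in LRU order
        self.faults = 0
        self.writebacks = 0

    def access(self, page, write=False):
        if page in self.frames:
            self.frames.move_to_end(page)     # mark most recently used
            if write:
                self.frames[page] = True
        else:
            self.faults += 1
            if len(self.frames) >= self.num_frames:
                victim, dirty = self.frames.popitem(last=False)  # evict LRU
                if dirty:
                    self.writebacks += 1      # must write before reuse
            self.frames[page] = write

pager = LRUPager(num_frames=2)
for page, write in [(1, True), (2, False), (1, False), (3, False), (2, False)]:
    pager.access(page, write)
```

With two frames, this reference string causes four faults; evicting page 1 costs a write-back because it was dirtied on first touch.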

Note that contents of this Virtual memory article discussed above, pertaining to page files or swapping, have recently been moved to the Paging article. --Kubanczyk 20:46, 4 October 2007 (UTC)

Recent "sizing virtual memory" section

I removed this section recently added by User:Jcea:

Historically, operative systems required to be configured to use not swap space at all, or at least the same space than RAM available. The reason was that main memory (RAM) was considered a cache for swap space. So, if you had 64 MB of RAM and no swap, your applications were limited to 64 MB, but if you configured 128 MB of swap, your applications could use 128 MB.
So, an usual rule of thumb was used: if you needed swap, use double swap space than RAM. This would double virtual memory usable for applications, while keeping thrashing under control.
Current operative systems provide virtual memory as the sum of physical RAM plus the swap space, so you can configure swap size less than system memory. Thrashing risk is also reduced because current paging algorithms are more clever, disks are faster and main memory is bigger.

Because I think it's factually wrong and it doesn't cite any sources. First, I have yet to hear of an operating system that would duplicate a significant proportion of its in-memory storage in the swap space; certainly not around the time when computers started reaching 64MB of main memory (there were some with a single-level store, however it has little to do with the concept of swapping). Second, it repeats the popular misconception that virtual memory is merely "RAM + swap space". Finally, stating that disks are getting faster and memories are getting larger is useless; what does matter from the performance aspect is the difference between their growths. Less swap space is being used since the performance of hard disks can't keep up with the performance of primary storage; page replacement algorithms are more critical than ever only for disk cache concerns. -- intgr 23:02, 8 March 2007 (UTC)

Note that contents of this Virtual memory article discussed above, pertaining to page files or swapping, have recently been moved to the Paging article. --Kubanczyk 20:46, 4 October 2007 (UTC)

Multics

Virtual memory was invented at MIT during the early 60s. It was implemented in the Multics project - a spoof on Unix.

Wrong, Unix was a spoof on and inspired by Multics. Ken Thompson is on the record about it. RossPatterson 02:47, 14 June 2007 (UTC)

The original purpose was twofold: provide more apparent memory than physically exists and provide a virtual address space to multiple users. The virtual address space allowed multiple processes to run simultaneously with extended isolation, so that a problem within one address space would not cause trouble for a process using another virtual address space. This technique was adopted by IBM in its VM operating system and was copied by Unix and Windows. The idea was to provide a kernel, shared by everyone, in read-only RAM, and a virtual address space beginning at the end of the kernel that was read/write. Every user shared the same kernel and had their own virtual address space.

Actually, VM/370 and its predecessor CP-67 were preceded by TSS/360, which was the first IBM virtual storage system. If Unix and Windows copied virtual memory from IBM, it would have been from OS/VS, not VM, but Unix would actually have gotten it from other PDP systems, since virtual memory was added in the 4BSD releases. RossPatterson 02:59, 14 June 2007 (UTC)

The page table concept was used to provide discrete page tables for each user. The same hardware concept is used in all cases. —Preceding unsigned comment added by Sloop (talkcontribs) 11:51, 18 June 2006

Wrong. Atlas at Manchester University in the UK and Fritz-Rudolf Güntsch pioneered VM in academia. —Preceding unsigned comment added by 86.134.61.255 (talk) 16:26, 11 September 2008 (UTC)

First sentence of overview

"Hardware must have two methods of addressing RAM, real and virtual" seems clearly wrong. Lots of small processors run with just real addressing.

Fastest location of the disk - overly simplistic rule for swap placement

In the Swapping in the Linux and BSD operating systems section, it says: ... it can be guaranteed that the swap region is located at the fastest location of the disk which is generally the center cylinders between the inner and outer edges of the disk.

I can see what the author is trying to say, but I think it's a bit simplistic and can be misleading. There are two aspects to hard disk performance: transfer rate and seek time. On a modern disk, the transfer rate is always highest on the outer cylinders which correspond to the first logical blocks on the disk. This is because there are more sectors per track on the outer cylinders, while the disk spins at a constant speed. The seek time depends on how far the head has to move. Statistically this will be lowest when the target cylinder is close to the other data, which would be the middle of the disk if the disk was entirely used and the access pattern was equally distributed.

So, from a transfer rate point of view, it is best to put the swap space at the start of the disk, and from seek-time point of view it should be close to the regularly accessed data (which could be the middle, but may not be). Royhills 08:33, 8 August 2007 (UTC)

I agree, this statement is dubious and sounds like a myth from the earlier days when head movement still was a significant factor in seek times. I also removed the "file system fragmentation" claim since that doesn't apply when swap space is pre-allocated on an empty file system.
For sequential reads, the inner tracks are generally fastest due to higher data density (not the outermost), but sequential reads are rare when swapping anyway. The rationale for positioning swap in the middle is probably that the seek time from either "edge" is the smallest; however, it's unlikely to make much of a difference, since as far as I can tell, "normal" disk accesses would be rare when the operating system is already busy swapping -- the majority of time is spent seeking between swap pages. In this situation, the seek time of today's hard drives is bound by rotational latency rather than head movement. That is, waiting for the platter to rotate into place, rather than waiting for the head to move into place. The average rotational latency is invariant regardless of the placement of data, and depends only on the spindle speed; for example, about 4.2 milliseconds (half of the 8.3 ms rotation time) for 7200 RPM disks.
Anyway, I can't pretend to be the know-it-all, but claims like these should not go in without a reasonable rationale from a reliable source. -- intgr [talk] 16:38, 9 September 2007 (UTC)
I do not understand your reasoning for higher data density on the inner tracks rather than the outer ones. The outer tracks are capable of storing many more sectors than the inner ones, as well as having the highest linear velocity. Therefore, in the same amount of time (1 rotation, for instance), you would be able to read several times more data from the outer track than from the inner one. And in the end, it's the read speed that matters... Aurimas 20:20, 10 September 2007 (UTC)
You're right, outer tracks are faster than inner ones, I got it the wrong way around somehow.
But when you're swapping so much that its performance actually matters, sequential read speed makes little difference because the majority of time will be spent in disk seeks rather than reads. I don't have any benchmarks or stats for how long the average read would be under a heavy swapping workload, but it is unlikely to be long — disks make terrible grinding noises whenever that happens.
Consider that current consumer-grade 7200 RPM disks achieve read speeds up to ~60 MB/s (outer tracks) and ~35 MB/s (inner), and seek times of at best 13.0 ms [4]. This means that a 128 kB read takes between 2.0 milliseconds (outer) and 3.5 ms (inner). That would sound like a significant difference, but if you add that every single such read requires one seek, you get 15.0 ms versus 16.5 ms, just a 10% improvement.
A 10% improvement is not bad per se, but you can take for granted that once you run into swap thrashing, the performance is going to be abysmal anyway. Does it really make any sense to optimize 10% off a workload that should never happen in the first place? At the cost of permanently reserving the hottest tracks on a hard disk? No thanks, I would opt for an additional stick of RAM and rather install my operating system on the outer tracks, gaining a few percent in boot time. -- intgr [talk] 12:12, 11 September 2007 (UTC)
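The arithmetic in the comment above can be reproduced directly, using the figures it quotes (60 and 35 MB/s transfer, 13.0 ms seek, 128 kB reads):

```python
# Reproducing the figures quoted above: a 128 kB read plus one seek,
# comparing outer-track vs inner-track transfer rates.
SEEK_MS = 13.0
READ_KB = 128

def total_ms(rate_mb_s):
    transfer_ms = READ_KB / rate_mb_s   # 1 MB/s == 1 kB/ms
    return SEEK_MS + transfer_ms

outer = total_ms(60)                    # ~15.1 ms per read
inner = total_ms(35)                    # ~16.7 ms per read
improvement = (inner - outer) / inner   # roughly 9-10%
```

Once the fixed seek cost is added, the large difference in raw transfer rate shrinks to a single-digit-percent difference per read, which is the point being made.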

Note that contents of this Virtual memory article discussed above, pertaining to page files or swapping, have recently been moved to the Paging article. --Kubanczyk 20:46, 4 October 2007 (UTC)

Split discussion continued

[This discussion was moved from the #Contradictory lead and sections (Split Suggested) section.]

As everyone appears to agree that this article needs to be split, and it still hasn't been done yet, I'm tagging it with {{split-apart}}. -- intgr 19:53, 25 November 2006 (UTC)

Oppose. This article just needs work, not splitting. While I agree that virtual memory addressing, paging and swapping are three different things, 99% of people reading this article (including me) expect all 3 of them here together. Thus, this is the place to explain the differences and clear things up. There is no rule that a Wikipedia article must describe only one tiny self-contained thing (this is an encyclopedia, not object programming). Please read the guideline to splitting pages, to learn why a split is not recommended here at all (especially note that we face the danger of an "NT-paging vs rest-of-the-world-swapping" undesirable content fork). --Kubanczyk 10:18, 18 September 2007 (UTC)

I definitely agree that this article should cover swapping in some depth, but it should be contained within a single section, clearly separated from the rest of the aspects of virtual memory and intentionally constrained — not be a recurring theme in every part of the article. Demand paging, memory protection, memory-mapped files and shared memory are at least as important applications of virtual memory and should also get comparable coverage. And the virtual memory article should not attempt to cover any and all aspects of swapping. For example, disk fragmentation caused by swap files/partitions (which currently dominates this article) is definitely an irrelevant aspect when it comes to virtual memory. There should be a "Main article: Swap" link for that; this is a rather recurring pattern.
Having the two concepts confused, and having swap space redirect to virtual memory, as if they were the same thing is not acceptable. Virtual memory is one of the most important innovations in computer architecture history, yet most people see it as "oh, that means using the HD as RAM right?". -- intgr [talk] 10:50, 18 September 2007 (UTC)
Great, I see you want to keep Virtual memory "pure" and I agree. The cause of the problem is the fact that, in this world, many users search for "Virtual memory" when their Windows systems complain to them using this wording. If you split, they will still come here, and rest assured they won't waste their precious clicks on some strange "paging" or "swapping" links. This page will quickly regain all those useful "tips of the day". Moreover, this is backed up by guideline WP:COMMONNAME. So maybe a wiser plan:
Regarding the "NT controversy" problem:
  • Page file etc. - redirect to Swapping, add NT controversy explanation there
  • Paging - leave as is for now, add NT controversy explanation; no point in purifying it too much, as the stupidity force will be even more overwhelming.
Good enough?
--Kubanczyk 12:17, 18 September 2007 (UTC)
I still disagree with your "virtual memory addressing" suggestion, and I don't think you have a good case when it comes to WP:COMMONNAME, because people in the industry and academia draw a strict line between virtual memory and swap. Making the "virtual memory" name redirect to swap would be inaccurate and would conflict with many authoritative publications on the subject; I believe WP:V can be invoked. And in my view, Wikipedia strives for academic accuracy more than it does for colloquialisms.
I think the problem can be solved well enough with proper disambiguation/clarification in the lead section. -- intgr [talk] 14:38, 26 September 2007 (UTC)
People won't bother with reading, as it's obvious for them here is the place. I understand you hereby volunteer to keep Virtual memory clean of "Windows page file" tips for the next year? I can take 2009 shift. --Kubanczyk 17:20, 26 September 2007 (UTC)
I doubt there will be enough to make a problem out of it, but if you say so. I've already got a heap of pages on my watchlist that require routine reverting due to spam or other questionable edits, it will be a drop in the bucket. -- intgr [talk] 17:34, 26 September 2007 (UTC)

Fair enough, what's the plan then?

Please comment. --Kubanczyk 17:42, 26 September 2007 (UTC)

The virtual memory article should describe swapping briefly, as it is one of the important applications of virtual memory, with a {{main}} link; but leave enough space so that demand paging, memory protection, memory-mapped files and shared memory could also be covered in comparable depth.
I'd prefer naming the article simply swapping. I am not sure what to do with the paging article — its context is not limited to swapping, but also includes paging of memory-mapped files. (as far as I can tell, "swapping" does not include that?)
The rest of your plan sounds good. -- intgr [talk] 18:13, 26 September 2007 (UTC)
Virtual memory: ok. Swapping: no can do; it is already a well-named disambiguation page. Swapping (computing) or Memory swapping — which one is more common?
About paging, first things first, we need clear and sourced definitions of all the meanings of the word "paging". There is already a talk between Warrens and JS here, with a lot of facts and sources probably, but it is very hostile and I haven't managed to read it yet. --Kubanczyk 14:07, 27 September 2007 (UTC)
Hmmm, reading Warrens/JS was a waste of time. After checking some real sources: Memory swapping is almost non-existent, Disk swapping is more popular but sounds awkward. I've read two sources, and both agree that Paging is actually the name we want (I added citations there), not Swapping. This is probably not a Microsoftism after all. Merge suggested. --Kubanczyk 22:29, 27 September 2007 (UTC)
Moved most stuff to Paging. A lot of cleanup work is needed on both articles. Here we need demand paging, memory protection, memory-mapped files and shared memory, to repeat Intgr's suggestion. --Kubanczyk 20:46, 4 October 2007 (UTC)

Video RAM as swap space

I saw the referred-to article [5] and tried this on a headless server. While Video RAM should perform well for writes, I don't see a rationale for high-performance reads. I tested a video RAM block device using hdparm -t, which gave results 3 times slower than a swap partition on an IDE disk. The possibility of using Video RAM as swap space is interesting, but I'm not sure it's vital to the article. That performance claim should be substantiated or removed. SeanCollins 02:05, 1 October 2007 (UTC)

I don't doubt that it's much faster than hard drives — while hdparm measures the ideal maximum throughput of sequential reads, in practice, swap reads are very noncontiguous. Any kind of RAM will greatly make up for the lost bandwidth.
But I removed the whole paragraph for another reason — it's just a silly hack, nothing more. Nobody is going to build and sell computers that use video memory as swap, because that kind of defeats the purpose of having video memory in the first place. (Especially with video memory being significantly more expensive than regular RAM) -- intgr [talk] 07:49, 1 October 2007 (UTC)

Note that contents of this Virtual memory article discussed above, pertaining to page files or swapping, have recently been moved to the Paging article. --Kubanczyk 20:46, 4 October 2007 (UTC)

Introduction added

I added a basic introduction for non-computer-literate people, as the article started with a fairly off-putting technical feature listing. —Preceding unsigned comment added by Rbakels (talkcontribs) 07:37, 5 October 2007 (UTC)

Great! Could you try a similar thing with Paging? --Kubanczyk 22:28, 5 October 2007 (UTC)

I agree a popularly understood intro is a good idea, but...

I think 'trick' is a bad term to use here. Perhaps 'technique'?

Also, saying it is implemented solely by the OS and eliminating the reference to hardware is just wrong. Chromaone 20:21, 5 October 2007 (UTC)

I think "abstraction" is the word. --Kubanczyk 22:28, 5 October 2007 (UTC)

Rewrite?

I agree with the "too technical" banner at the top of this page. I also think the article goes into too much detail about specific implementations, e.g. x86. I therefore suggest a re-write:

  • Intro similar to current version but worded more simply and with simpler diagram without hex addresses. Point out that virtual memory also "fools" a lot of the CPU hardware. Link to footnote that explains why contiguous addresses are important - allows software and hardware to assume that "4 bytes starting at location 1000" means "locations 1000, 1001, 1002, 1003 in that order"), which simplifies construction of both software and hardware; perhaps edit Memory address to support this. Emphasise benefits to programmers and users (including security in MVS-like schemes). Keep the very good note that virtual memory does not simply mean paging / swapping (which are not the same).
  • How it usually works. Remove x86-specific material.
    • Define virtual address space.
    • Page tables.
    • Dynamic address translation and TLB.
    • Handling page faults. Page fault interrupt. LRU algorithms decide what gets paged out.
    • Single and multiple virtual address spaces. MVAS increases security.
    • The need for V=R mode. The OS. I/O. Timing-dependent programs.
  • History. Perhaps present this as a table, with dates / date ranges in the left column.
    • In the beginning... keep current content.
    • Emphasise that making app programmers responsible for overlaying is expensive (even now; worse with primitive development and testing tools).
    • Multiprogramming made memory management more important - fixed partitions wasted memory; MVT-like schemes were vulnerable to fragmentation and app programmers could do nothing about this.
    • Relocating loaders (PDP-10 and System/360) slightly mitigated these difficulties. (Need to be careful here!)
    • Pre-vm time-sharing used swapping (whole users / apps).
    • Atlas and Fritz-Rudolf Güntsch pioneered vm in academia.
    • Burroughs 5500 first commercially available vm machine.
    • Any other implementations before CP-67.
    • IBM's Sayre found that automated vm (Blaauw's DAT box plus a paging supervisor) out-performed programmer-calculated overlay designs (ref to CP-67). So IBM committed itself to vm, although initially System/370 lacked both hardware and software to support it.
    • Minicomputers - NORD and VAX.

The language used will be simpler, but the content will not be dumbed down. You may notice that there are a few new technical topics (e.g. "I/O" in "V=R mode" refers to the fact that IBM channels (I/O processors) could not access the DAT box).

I don't see the relevance of the stuff about segmentation in Multics and 80286 in the current history section. I admit I don't really understand it, as I learned about vm on IBM mainframes, where the implementation is almost always transparent to apps. Can anyone clarify its meaning and relevance?

Other refs: (must google "virtual memory sayre" for them again - lost them due to finger trouble) Philcha 11:27, 22 October 2007 (UTC)

Generally agree on the rewrite. Flaws I see in your plan: multiple virtual address spaces are introduced too soon; you did not include the initial segments-vs-pages struggle; please do not remove the x86 material, just move it down to a further subsection (most readers will only come here to look for it). I don't like the "history as table" idea. The Multics stuff I see as an interesting semi-academic example of a more (maybe: too much) advanced virtualization model. Despite its failure, it is a notable subject - a starting ground for Unix. The meaning is pretty clearly stated on this talk page and in the article. --Kubanczyk 12:20, 22 October 2007 (UTC)
Re multiple virtual address spaces, it delivers a security benefit, and I'd like to present all the main benefits of virtual memory together and early; but it depends on how long the explanation turns out to be. I'd play it by ear, and you may be right.
Why is initial segments vs pages struggle relevant? And where? If it's relevant, I'd suggest late in the "how it works" section: not earlier, as it complicates the explanation of the concept; not in the history, because it would make one item in the history much longer and more complex than the rest.
Re x86, how many other architecture-specific implementations should the article include? If none, why should x86 be privileged? I wouldn't expect would-be writers of OS code for x86 to rely on Wikipedia.
What you said about Multics suggests it belongs more in a Unix-related article rather than a general article about virtual memory. Did Multics also contribute to the development of virtual memory concepts and /or technologies?
Why don't you like the idea of history as a table with dates in left column? Philcha 12:45, 22 October 2007 (UTC)
Multi: "Benefits - version for dummies" are clearly stated in two sentences in the lead section. The security benefit is such a subtle one that, to explain it, you have to be really in-depth about how exactly vm works. Btw, security is really a side effect here, achievable by much simpler mechanisms than multiple address spaces.
Segments: Should be briefly mentioned when introducing pages. Notable enough (1) because of x86 (2) because of B5000 (3) to show that page allocation or page table is not needed for VM.
x86: Yes, there is this one implementation that most people will expect to have its own section here: x86. I gave you the reason already.
Multics: (1) No, total misunderstanding. (2) Yes it did in an interesting way, although it failed. Isn't it obvious from the article itself?
Table: because history is supposed to be a fascinating story, not a dull table.
Generally: End of this thread from me. Please, proceed as you wish. Most of the content needs a rewrite anyway.
--Kubanczyk 17:12, 22 October 2007 (UTC)

Note: I have moved the rewrite draft to Virtual memory/rewrite. If you cannot rewrite the current article incrementally then please keep it separate until it's ready to replace the current. -- intgr [talk] 15:22, 23 October 2007 (UTC)

I've rewritten all except the "History" section, which can be done piecemeal since it's chronological. I hope I've achieved the objectives I set: simpler language (and diagram); readers need less prior knowledge; more general, not so focussed on a specific architecture. Since my background is IBM mainframes, the description of the principles is based on System/370, although I've tried to make it as general as possible and have included the previous version's description of the x86 implementation. Please add any others that are sufficiently distinctive and important. Citations would also be useful. If there are a lot of additions, we may have to consider splitting the article. Philcha 14:38, 24 October 2007 (UTC)

People have edited virtual memory after the most recent change to virtual memory/rewrite; at what point should the rewrite be considered ready to replace the existing page, so that we don't have two articles going in different directions? Guy Harris 05:01, 13 November 2007 (UTC)
It appears that's already been done. Should we request that virtual memory/rewrite be deleted? Guy Harris 05:08, 13 November 2007 (UTC)
Probably. --Kubanczyk 14:13, 13 November 2007 (UTC)

Definition of "paging"

I haven't read the whole rewrite yet, but I would like to clarify this quote: "Paging is the process of saving inactive virtual memory pages to disk and restoring them to real memory when required."

First, as far as I can tell, it should say "physical memory" instead of "virtual memory pages" — the virtual mappings remain in place, but the physical page referred to by this mapping is moved.

I agree you can look at it either way, but I prefer to stick with saving / restoring "virtual memory pages" because virtual memory pages are what all parts of the computer system (hardware and software) are interested in / aware of, except for the dynamic address translation hardware and the paging supervisor. Philcha 01:46, 12 November 2007 (UTC)

But more importantly, we haven't been able to agree on a clear definition of what the word "paging" actually means on Talk:Paging, and thus I find the paging article more confusing than clarifying. A definition that people can agree on is critical for discussion. A quick Google search I made yielded two explanations that were specific enough and made sense to me (from some ancient OS-specific glossaries, they wouldn't make good sources):

  • In computer architecture, a technique for implementing virtual memory, where the virtual address space is divided into fixed-sized blocks called pages, each of which can be mapped onto any physical addresses available on the system. [6]
  • In operating systems, the act of managing disk-backed data in main memory. The terms "paging in" and "paging out" respectively refer to loading and dropping of disk-backed pages from main memory [7] (usually the page cache, but also applied to swap space — I am not sure whether this is correct usage or not).

As far as I can tell, both of these definitions are used in practice. Do you have an idea of how to disambiguate these meanings from the word "paging"? Does the second definition of paging also include all aspects of "swapping" of dynamic application memory? [I used to think not, but I am not sure anymore.]

I haven't found any better sources than the above so far, but to be honest I haven't been looking either, which is why I haven't responded on the paging discussion. -- intgr [talk] 18:11, 24 October 2007 (UTC)

Section "Segmented virtual memory" redundant?

As far as I can see the only things this section adds are: in segmented systems the virtual memory management tables describe variable-length sections of virtual memory; the address translation hardware checks whether all of the range of virtual memory required by the current instruction is in the segment / page (a complication which I deliberately left out of the description of paging, and which is very implementation dependent - for example IBM System/370 and successors have instructions which handle very large blocks of data, up to a few gigabytes at a time, and these are interruptible; so if the data spans across pages and a required page is not in real memory, the DAT hardware raises an interrupt, the paging supervisor retrieves the required page and then the "long" instruction resumes where it left off).

Perhaps we need two sections: an overview of the concepts and basic mechanisms in language which is as implementation-neutral as possible (quite difficult); and a separate "implementations" section which describes how specific systems implement virtual memory. Philcha 12:19, 14 November 2007 (UTC)

No, this section just requires some work. It is definitely not redundant. Segmentation (in the Multics sense) greatly expands the idea of virtual memory, although this found little practical use (currently only iSeries). The key ideas, that you don't mention by the way, are: "memory segment is a synonym of disk file" and "no linker needed". --Kubanczyk 12:36, 14 November 2007 (UTC)
And, furthermore, there are systems that do segmentation but don't do paging (e.g., the Burroughs systems); one reason I added the section was to make it clear that you can have virtual memory without paging. Guy Harris 19:24, 14 November 2007 (UTC)
I don't think the Burroughs systems had "memory segment is a synonym of disk file" semantics; that's easier with systems that do paging, as you don't have to be able to fit the entire file into main memory to access it. Guy Harris 19:26, 14 November 2007 (UTC)
The idea probably originated in Multics. --Kubanczyk 13:23, 15 November 2007 (UTC)
And the segmentation checks only check whether all data used by the instruction is within the segment; it doesn't check whether all of it is resident, at least on the Burroughs machines (where there is no paging, so either the entire segment is present in memory or it's not) and, as far as I know, on the GE-645 and Honeywell 6180 (where the instructions were continuable - on the GE-645, at least, a page fault dumped a huge pile of internal state information on the stack, and returning from the page fault reloaded that pile, so if you got a page fault in the middle of an instruction, the instruction would be continued after the page fault was serviced). Guy Harris 19:49, 14 November 2007 (UTC)
The last few posts in this thread mention details of 3 different segmentation implementations (Multics, Burroughs, GE). This makes me feel more strongly that the article should start with "concepts" sections for the general reader followed by "implementation details" sections for the more technically inclined.Philcha (talk) 16:27, 17 November 2007 (UTC)
GE, and later Honeywell = Multics; Multics ran on the GE-645, Honeywell 6180, and later machines. The main distinction above is between segmentation without paging (Burroughs large systems) and segmentation with paging (Multics). Guy Harris (talk) 20:09, 17 November 2007 (UTC)

Virtual=real operation

What exactly does "virtual=real mode" refer to? Does it refer to a mode wherein a given virtual address is always resident in memory at a real address equal to the virtual address? If so, then, whilst it might be true that interrupt routines and the paging supervisor run in that mode in MVS and its successors, it's not necessarily true in other OSes. That code would probably be in wired-down memory (i.e., the pages are always resident), but there's no guarantee that the virtual address is equal to the physical address.

The part that talks about application programs seems to suggest that "virtual=real mode" only means "runs with pages wired down", not "runs with pages wired at virtual addresses equal to their real addresses". Guy Harris (talk) 03:50, 17 November 2007 (UTC)

Yes, it seems about 0% of this paragraph is generally true in the modern OSes. It uses very vague terms "real mode" and "fixed location", that I don't really understand. If it pertains to MVS line, let's move it "as is" to a more appropriate article—the perfect one would be probably "Evolution from MFT to MVT", but it does not exist yet :))
I think a better paragraph would be something along the lines of "Virtual addressing inside a kernel". --Kubanczyk (talk) 14:10, 17 November 2007 (UTC)
By "virtual=real mode" I did mean a given virtual address is always resident in memory at a real address equal to the virtual address.
I agree that "pages wired down" is a broader concept of which "virtual=real mode" is a special case. IBM mainframes do not distinguish between them, because internally the Program Status Word (program counter plus a lot of control switches) contains a "DAT on / off" bit switch.
In principle timing-dependent programs and applications which handle I/O at a low level could run as "pages wired down", but any parts of memory accessed by I/O controllers such as IBM channels have to be fully V=R because these devices do not have dynamic address translation.
In principle interrupt handlers could run in "pages wired down".
I'm not happy with the alternative title "Virtual addressing inside a kernel" because: some apps need to run V=R or at least with "pages wired down"; "kernel" may puzzle general readers.
I think again we're facing the problem of dealing with more than one audience. Perhaps we need an "advanced" section at the end which deals with details of different implementations. Then the "for general readers" sections can point out that some applications and large parts of the OS cannot run in full virtual memory mode, and refer to a sub-section of the "advanced" section. Philcha (talk) 16:16, 17 November 2007 (UTC)
FYI, I/O controllers on most of the systems are not DAT-capable, but it does not in any way imply that they need V=R. Normally they just depend on OS to calculate real address for them, and use "wired-down pages". So, there is no real reason left for V=R, generally, and I don't really see why MVS needs it. I think the V=R information needs a lot more context, because now it is plainly confusing. --Kubanczyk (talk) 19:13, 17 November 2007 (UTC)
V=R mode sounds as if it's specific to MVS and maybe VM - which means that IBM mainframes probably do distinguish between them if they're not running either of those; Linux, for example, has no notion of V=R mode, and probably leaves DAT on in almost all of the kernel, including the paging code and most if not all of the interrupt-handling code path. As far as I know, interrupt handlers on Unix-like systems and Windows run with pages wired down but with the paging hardware turned on.
As Kubanczyk noted, to handle I/O, you need to wire down pages on which I/O is taking place, get the physical addresses of the pages (the physical addresses won't change as long as the pages are wired), and supply those physical addresses to the devices. If I/O has to be done to a physically contiguous region of memory - i.e., if scatter/gather I/O isn't supported and you don't have an IOMMU - and the I/O can't be done as multiple operations (disk I/O can, at some performance cost, but tape I/O probably can't) - you might have to ensure the physical contiguity of virtually-contiguous pages, and one way to do that might be to use V=R.
The part of the audience that would be confused by the term "kernel" would probably also be confused by page tables - a very high-level description that just mentions pages and segments would probably be sufficient. Virtual memory is a technical subject, so there are limits on what can be done for a non-technical audience. Guy Harris (talk) 20:26, 17 November 2007 (UTC)
"you might have to ensure the physical contiguity of virtually-contiguous pages, and one way to do that might be to use V=R"... Actually I still don't see how V=R will help you in any way with allocating a contiguous physical buffer. V=R is not a memory allocation method, is it? You have to ask the OS: "give me contiguous memory for I/O", whether you address it later in V, R, or V=R.
I suspect V=R is needed for MVS programs that write their own CCWs and include buffer addresses in CCWs, while conveniently forgetting about DAT... Pure speculation. Come to think of it, V=R can slightly impair such contiguous buffer allocation, because of a possible V vs R clash. Such a situation would be easily avoidable with V addressing. --Kubanczyk (talk) 23:09, 17 November 2007 (UTC)
I think the last few posts confirm my suggestion that the simpler parts of the article should simply point out that some areas used by the OS, including page tables and I/O buffers, cannot be fully pageable - buffers because channels / whatever cannot access the DAT box; both because they would lead to paging recursions. As far as I can see, in practice page tables and first-level interrupt handlers will wind up permanently wired down, as having them paged out would cause chaos.
Kubanczyk is right about full V=R being needed for programs that "dynamically modify channel programs" (IBM's official phrasing), among other reasons because Get Real Address is a privileged instruction which apps can't use and apps therefore do not know what real addresses to plug into CCWs.
I like Kubanczyk's section title "Permanently resident pages" and most of the text under that heading. I've found z/OS Basic Skills Information Center: z/OS Concepts which explains that z/OS: has 3 modes, V=V, V=R and V=F (virtual = fixed, i.e. "wired down" and with DAT on), although the doc does not explain exactly where V=R and V=F are used; supports both paging (4KB pages) and what it calls "segmentation" (variable size, in principle up to exabytes). The description I originally gave was accurate for MVS (original version). It's clear that MVS' successors have gradually extended virtualization in several directions, and I don't think Virtual memory should go into that much detail. I therefore suggest Kubanczyk's "V=R" section should be replaced by something like: "Some early virtual memory operating systems, such as late-1970s MVS, had no means of fixing the real addresses of pages which were subject to DAT, and therefore made some parts of the operating system V=R: .... Under these systems some applications also had to run V=R, notably ... timing-dependent ... took control of I/O at a very low level." Philcha (talk) 10:48, 26 November 2007 (UTC)
Guy Harris' "The part of the audience that would be confused by the term 'kernel' would probably also be confused by page tables - a very high-level description that just mentions pages and segments would probably be sufficient" looks wrong to me, because a reader who is intelligent but not familiar with virtual memory systems would then ask how the DAT box finds the corresponding real memory areas, how the system knows when a block of virtual addresses has been paged out and how it handles that situation. Philcha (talk) 20:54, 25 November 2007 (UTC)
I suspect "a reader who is intelligent but not familiar with virtual memory systems" and capable of then "[asking] how the DAT box finds the corresponding real memory areas, how the system knows when a block of virtual addresses has been paged out and how it handles that situation" would not be fazed by the concept of an OS kernel, even if the term might not be familiar (the impression I have is that the supervisor-mode portion of OS/360 was called either the "control program" or the "nucleus" - was the nucleus the core part of the supervisor-mode portion, with what might be now called "loadable modules" implementing, for example, some system calls in the control program?). Guy Harris (talk) 23:28, 25 November 2007 (UTC)
I agree that "nucleus" was used to describe the core part of the System/370 OSs, but I don't remember anyone ever defining nucleus at all precisely. Philcha (talk) 10:48, 26 November 2007 (UTC)
Btw, nucleus is defined pretty precisely in the "OS/360 introduction" pdf from bitsavers. IIRC it seemed like a synonym of "kernel". --Kubanczyk (talk) 11:07, 26 November 2007 (UTC)

Removing poorly-written inaccurate comment

Someone's just added to the intro:

The earlier paras in the intro deliberately point out that paging / swapping is not part of the definition of virtual memory. I'm removing the erroneous insertion. Philcha (talk) 11:29, 31 December 2007 (UTC)

