HeyBub wrote:
> Leythos wrote:
>> In article <#1wndj28KHA.3176@TK2MSFTNGP05.phx.gbl>, heybub@gmail.com
>> says...
>>> Lisa wrote:
>>>> I was told by a computer repairman that it's not necessary to defrag
>>>> my laptop. If the hard drive gets full, remove files and always
>>>> make sure I'm using a virus protection.
>>>> What are your thoughts?
>>> I can envision a situation in a data center with hundreds of
>>> thousands of transactions per minute where defragging may be of some
>>> slight benefit (assuming an NTFS file system).
>>>
>>> I can also imagine a user devoted to daily defragging experiencing a
>>> power interruption during a critical directory manipulation process.
>> On a small computer with many add/delete/grow/shrink operations,
>> defragging can significantly improve file access times, and the
>> difference can be very noticeable to users if the system was badly
>> fragmented before the defrag.
>>
>> White-space fragmentation is not normally an issue, but a file that is
>> fragmented into 8,000 parts will have an impact on system performance.
>>
>> This argument has gone on for decades, but it's the people who
>> maintain systems across many areas who know the benefits of defrag.
>
> Ignorance can be fixed - hence the original question. It's knowing something
> that is false that's the bigger problem.
>
> Consider your example of 8,000 segments: a minimum segment size of 4,096
> bytes implies at least a 32 MB file. A FAT-32 system requires a minimum
> of 16,000 head movements to gather all the pieces. With an average access
> time of 12 msec, you'll spend over three minutes just moving the head
> around. Factor in rotational delay to bring the track marker under the
> head, then rotational delay to find the sector, and so on, and you're up
> to four minutes or so to read the file.
>
> An NTFS system will suck up the file with ONE head movement. You still
> have the rotational delays and so forth, but NTFS cuts those minutes of
> head movement off the slurp-up time.
Hi Heybub,

This is the second time I've heard you make this claim.
How do you 'envision' the head(s) reading all fragments in one go?
In your example there are 8,000 fragments. If these are scattered all
over the place, the head has to visit a lot of different locations before
all the data is in. Compare this to one continuous, sequential run of
data, which the head reads without any extra seeking or skipping.
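A quick back-of-envelope model makes the gap concrete. The constants
below are my own assumptions (12 ms average seek, roughly 4 ms rotational
latency for a 7200 rpm disk, 50 MB/s sustained transfer), not
measurements, so treat the output as orders of magnitude only:

# Rough model: each fragment costs one seek plus rotational latency
# before its data starts transferring. All constants are assumptions.
SEEK_MS = 12.0      # average seek time
ROT_MS  = 4.0       # average rotational latency (~7200 rpm)
RATE_MB = 50.0      # sustained transfer rate, MB/s
FILE_MB = 32.0      # file size from the example above

def read_seconds(fragments):
    overhead = fragments * (SEEK_MS + ROT_MS) / 1000.0  # per-fragment cost
    return overhead + FILE_MB / RATE_MB                 # plus raw transfer

print(read_seconds(1))     # ~0.7 s  - one continuous run
print(read_seconds(8000))  # ~129 s  - scattered into 8,000 pieces

Whatever the exact constants, the per-fragment overhead dominates; no
filesystem can make 8,000 scattered pieces arrive with one head movement.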
Also, especially on systems that need a huge swapfile, filling up your HD
a few times can leave you with a heavily fragmented swapfile, which
carries a performance penalty.
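(If I remember correctly, the Sysinternals Contig tool will report a
file's fragment count when run with its -a switch, e.g. "contig -a
C:\pagefile.sys" from an administrator command prompt, if you want to
check your own swapfile.)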
I have seen serious performance improvements (on both FAT32 and NTFS)
after defragging, including the system files, with the tool at
http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx
Others claim the same. How do you explain that?
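For what it's worth, here is a crude way to observe the effect yourself:
read the same file once in sequential offset order and once in shuffled
order (the shuffle stands in for fragmentation; the file name is a
placeholder). Use a file larger than your free RAM, or run this right
after a reboot, or the OS cache will hide the difference:

import os, random, time

PATH = "testfile.bin"   # placeholder: any large existing file on disk
CHUNK = 4096            # read in 4 KB pieces, like the example above

size = os.path.getsize(PATH)
offsets = list(range(0, size, CHUNK))

def timed_read(order):
    """Read CHUNK bytes at each offset, in the given order."""
    start = time.time()
    with open(PATH, "rb") as f:
        for off in order:
            f.seek(off)
            f.read(CHUNK)
    return time.time() - start

print("sequential:", timed_read(offsets), "s")
shuffled = offsets[:]
random.shuffle(shuffled)
print("shuffled:  ", timed_read(shuffled), "s")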
Erwin Moller
>
> De-fragging an NTFS system DOES have its uses: For those who dust the inside
> covers of the books on their shelves and weekly scour the inside of the
> toilet water tank, a sense of satisfaction infuses their very being after a
> successful operation.
>
> I personally think Prozac is cheaper, but to each his own.
--
"There are two ways of constructing a software design: One way is to
make it so simple that there are obviously no deficiencies, and the
other way is to make it so complicated that there are no obvious
deficiencies. The first method is far more difficult."
-- C.A.R. Hoare