# Defragging



## Hotscot (Sep 25, 2008)

Since the DVR is basically a computer, is there any function for defragging the drive?

_(I wonder if there's housekeeping functionality doing that in the background)_

And if not, could that relate to problems like pixelation and sync, for example?

_(Unless there's an inbuilt function that always writes data sequentially.)_

If no one has done it before, maybe it would be interesting to analyze the fragmentation on an attached SATA drive.

_(I don't know if your PC recognises the SATA drive after use with the DVR.)_
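If the recordings partition ever turned out to be a standard Linux filesystem mountable on a PC (a big if), the fragmentation check could be as simple as running e2fsprogs' `filefrag` over the files and summarizing its output. A minimal sketch of the summarizing side; the paths and extent counts below are made up:

```python
import re

# filefrag (from e2fsprogs) prints one summary line per file:
#   /path/to/file: 3 extents found
# A file stored in a single extent (1) is unfragmented.
LINE = re.compile(r"^(?P<path>.*): (?P<extents>\d+) extents? found$")

def fragmentation_report(lines):
    """Summarize filefrag output: total files, fragmented files, worst case."""
    total = fragmented = 0
    worst = ("", 0)
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skip anything that isn't a summary line
        total += 1
        extents = int(m.group("extents"))
        if extents > 1:
            fragmented += 1
        if extents > worst[1]:
            worst = (m.group("path"), extents)
    return {"files": total, "fragmented": fragmented, "worst": worst}

sample = [
    "/mnt/dvr/show1.ts: 1 extent found",
    "/mnt/dvr/show2.ts: 14 extents found",
]
print(fragmentation_report(sample))
# {'files': 2, 'fragmented': 1, 'worst': ('/mnt/dvr/show2.ts', 14)}
```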


----------



## cody21 (Sep 26, 2007)

I believe it is based on Linux (Unix) and, as such, there is no 'defrag' required. Someone posted something here about VERY LARGE blocks (GB?) being used for recording & buffering functions. The pixelation and lip-sync issues have been going on for a couple of years and seemed to actually worsen a year ago with the introduction of MPEG4 technology. Yes, those issues suck. Wouldn't it be nice if the engineers could at least get those fixed before adding more functionality?


----------



## mtnsackett (Aug 22, 2007)

Actually, there is a defrag in the background; it was added with the powerdown feature for when you haven't used your DVR for a while.


----------



## TomCat (Aug 31, 2002)

mtnsackett said:


> Actually, there is a defrag in the background; it was added with the powerdown feature for when you haven't used your DVR for a while.


Source, please.


----------



## davring (Jan 13, 2007)

mtnsackett said:


> Actually, there is a defrag in the background; it was added with the powerdown feature for when you haven't used your DVR for a while.


I would like to find out more about your statement as well. As TomCat says: source?


----------



## n3ntj (Dec 18, 2006)

What if you use your DVR regularly? Does the OS run the defrag function on a schedule (monthly, for example)?


----------



## Steve (Aug 22, 2006)

FYI. We had a discussion on this topic here earlier this year: http://www.dbstalk.com/showthread.php?t=125486&highlight=defrag

No one mentioned background defragging at that time, IIRC, but perhaps it was recently added as a result of our discussion?

/steve


----------



## rahlquist (Jul 24, 2007)

I highly doubt we will ever get a straight answer on this one from D*, but I would be willing to bet that we don't need to worry about it, and here is why. I have seen it commented that there are 3 partitions involved in this DVR. Assuming System, D* Use, and Recordings partitions seems logical to me. System would be where the OS is stored. D* Use would be where the database for the guide is stored, along with the Movie Now type data, system logs, and possibly the live buffer; this would allow D* to grow the DB as needed and never collide with user space. Last would be your recorded programming; this would be the one you'd most likely worry about defragging, but I don't think it's needed.

Say your average shows are 30, 60 and 120 min. If D* is on the ball, these shows should be roughly the same size for each of the 3 time frames, i.e. every 30 min show should be within a certain size range (and that range would tighten if D* uses CBR). So if you regularly record a 30 min show, then you can bet that when you delete it, the hole created should be a usable size for the next 30 min show, which the FS should drop right into the hole. Any modern Linux FS from kernel 2.4.x and beyond should be smart enough not to dump a 1-hour show half into a 20 min block and half into other free space.

My speculation is that the slow guide is due to an overabundance of data: the increased amount of data we now carry in the guide, or DB maintenance running that we aren't aware of (indexing, purges, etc.), on top of people who fill their DVR to the brim.

Having too much data is where an extX-based FS can break down. If you're regularly filling the drive, you could run into heavy thrashing. If D* is keeping its D* Use partition full to the brim, it could thrash too. Also, as has been said, we have no idea what background processes run, so we could run afoul of one on any given occasion.
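The hole-reuse argument above can be sketched as a toy first-fit allocator. This is purely illustrative; the DVR's actual allocation strategy is unknown:

```python
def first_fit(free_holes, size):
    """Place a recording of `size` blocks into the first hole that fits.

    free_holes: list of (start, length) spans in disk order.
    Returns the chosen start block, shrinking or consuming the hole,
    or None if no single hole fits (the file would fragment).
    """
    for i, (start, length) in enumerate(free_holes):
        if length >= size:
            if length == size:
                free_holes.pop(i)  # hole consumed exactly
            else:
                free_holes[i] = (start + size, length - size)
            return start
    return None

# Delete a 30-minute show (say 1800 blocks) mid-disk, then record another
# 30-minute show: it drops straight into the freed hole.
holes = [(5000, 1800), (90000, 50000)]
print(first_fit(holes, 1800))   # 5000 -- the freed hole is reused exactly
print(holes)                    # [(90000, 50000)]
```

This is why similarly sized recordings tend to recycle each other's space cleanly: a hole left by a deleted 30-minute show is exactly the right size for the next one.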


----------



## armophob (Nov 13, 2006)

From everything I have read, the Linux operating system does not require it. I think we have gotten so used to Windows that we forget there can be maintenance-free operation. I am pretty sure a "reset everything" command does write over the hard drive and would be the equivalent of a defrag, if you really wanted to be sure.


----------



## rudeney (May 28, 2007)

The need for defragmenting is not a function of the O/S, but instead a function of the filesystem. Linux or any O/S running on FAT or NTFS will eventually need defragmenting to maintain performance. Any O/S running on ReiserFS or ext3 will not gain *as much* from defragmenting. That's not to say it never needs it, just that it does not take as much of a performance hit as FAT, NTFS and other filesystems.


----------



## Steve (Aug 22, 2006)

Here are some quoted sections from a relevant Wikipedia article (not the most reliable source of information, I'll admit). Bolding is mine:

http://en.wikipedia.org/wiki/Ext3

"*There is no online ext3 defragmentation tool working on the filesystem level.* An offline ext2 defragmenter, e2defrag, exists but requires that the ext3 filesystem be converted back to ext2 first. But depending on the feature bits turned on the filesystem, e2defrag may destroy data; it does not know how to treat many of the newer ext3 features.[7]

[...]

That being said, as the Linux System Administrator Guide states, "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."[11]

*Irrespective of the above (subjective) statement, file fragmentation can be an important issue in server environments such as in multi-media server applications. *While it is true that ext3 is more resistant to file fragmentation than either NTFS or FAT filesystems, *nonetheless ext3 filesystems can and do get fragmented over time. Consequently the successor to the ext3 filesystem, ext4, includes a filesystem defragmentation utility and support for extents (contiguous file regions)*."

/steve


----------



## veryoldschool (Dec 10, 2006)

rahlquist said:


> I highly doubt we will ever get a straight answer on this one from D*, but I would be willing to bet that we don't need to worry about it, and here is why. I have seen it commented that there are 3 partitions involved in this DVR. Assuming System, D* Use, and Recordings partitions seems logical to me. System would be where the OS is stored. D* Use would be where the database for the guide is stored, along with the Movie Now type data, system logs, and possibly the live buffer; this would allow D* to grow the DB as needed and never collide with user space. Last would be your recorded programming; this would be the one you'd most likely worry about defragging, but I don't think it's needed.
> 
> Say your average shows are 30, 60 and 120 min. If D* is on the ball, these shows should be roughly the same size for each of the 3 time frames, i.e. every 30 min show should be within a certain size range (and that range would tighten if D* uses CBR). So if you regularly record a 30 min show, then you can bet that when you delete it, the hole created should be a usable size for the next 30 min show, which the FS should drop right into the hole. Any modern Linux FS from kernel 2.4.x and beyond should be smart enough not to dump a 1-hour show half into a 20 min block and half into other free space.
> 
> ...


OS isn't on the drive [it's in chips].
Swap file partition,
DirecTV showcase partition
"Our" recordings partition


----------



## flipptyfloppity (Aug 20, 2007)

Linux filesystems need defragging about as much as any other filesystem (like NTFS) needs it. Which is to say, they don't really need it.

To be honest, with your PVR recording two streams at once and playing another back, the drive head is seeking several times a second anyway. So it's already operating as if it were fragmented, and actual fragmentation may make no difference at all.


----------



## Stuart Sweet (Jun 19, 2006)

Agreed. Talk of defragmentation probably is best left in the 20th century.


----------



## rudeney (May 28, 2007)

flipptyfloppity said:


> To be honest, with your PVR recording two streams at once and playing another back, the drive head is seeking several times a second anyway. So it's already operating as if it were fragmented, and actual fragmentation may make no difference at all.


The OS could be using an "extent" algorithm that pre-allocates disk space in 30-minute or 1-hour chunks (based on the bitrate of the signal being recorded) to help minimize fragmentation. Like you say, though, it may be a moot point, as the disk heads would still need to move around to seek from three (or possibly four, with DIRECTV2PC or future MRV) physical disk sectors.
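Back-of-the-envelope numbers for such pre-allocation, with the caveat that these bitrates are illustrative guesses rather than DIRECTV's actual figures:

```python
def prealloc_bytes(bitrate_mbps, minutes):
    """Bytes needed to hold `minutes` of video at a constant bitrate."""
    return int(bitrate_mbps * 1_000_000 / 8 * minutes * 60)

# Size of a 30-minute extent at a few plausible SD/HD bitrates:
for mbps in (4, 8, 12):
    gb = prealloc_bytes(mbps, 30) / 1e9
    print(f"{mbps} Mb/s for 30 min -> {gb:.1f} GB")
# 4 Mb/s for 30 min -> 0.9 GB
# 8 Mb/s for 30 min -> 1.8 GB
# 12 Mb/s for 30 min -> 2.7 GB
```

So a 30-minute pre-allocated chunk would be on the order of 1-3 GB, consistent with the earlier report of "VERY LARGE blocks (GB?)" being used for recording.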


----------



## rahlquist (Jul 24, 2007)

veryoldschool said:


> OS isn't on the drive [it's in chips].
> Swap file partition,
> DirecTV showcase partition
> "Our" recordings partition


Thanks, VOS. A swap partition is something I only think of when installing fresh.

I thought I read somewhere that if a clean drive is put in an HR-series box, the OS is copied off the chips onto the drive?


----------



## Steve (Aug 22, 2006)

Stuart Sweet said:


> Agreed. Talk of defragmentation probably is best left in the 20th century.


Respectfully disagree, especially with FAT and NTFS systems. Defragging any flavor of Windows is the easiest way to see a marked performance improvement. I've been running _Diskeeper_ for the past year on my 3 Windows desktops, and they all fly compared to prior performance.

Congrats on 18k, BTW! /steve


----------



## Steve (Aug 22, 2006)

flipptyfloppity said:


> To be honest, with your PVR recording two streams at once and playing another back, the drive head is seeking several times a second anyway. So it's already operating as if it were fragmented, and actual fragmentation may make no difference at all.





rudeney said:


> Like you say, though, it may be a moot point, as the disk heads would still need to move around to seek from three (or possibly four, with DIRECTV2PC or future MRV) physical disk sectors.


All the more reason to keep fragmentation to a minimum. Think about the head activity needed to seek from 3 contiguous files vs. needing to seek from 3 fragmented files. It could increase exponentially. /steve


----------



## P Smith (Jul 25, 2002)

veryoldschool said:


> OS isn't on the drive [it's in chips].
> Swap file partition,
> DirecTV showcase partition
> "Our" recordings partition


We dissected the HR20 disk more than a year ago; there is no such "DirecTV showcase partition".

How come there is a "swap file partition" here, if you recently stated "The OS is Linux and I doubt it even uses the disk for virtual memory"?

"Our" recordings partition: what about the current channel buffer?

You could deduce the second partition's purpose from the list of folders on it here.


----------



## rudeney (May 28, 2007)

Steve said:


> All the more reason to keep fragmentation to a minimum. Think about the head activity needed to seek from 3 contiguous files vs. needing to seek from 3 fragmented files. It could increase exponentially. /steve


Actually, FlipFlop's point is that having three or four data streams being simultaneously read/written means that no matter how well defragmented those individual streams are, there is going to be a lot of head-seek latency. Given that the average seek time of modern drives is in the sub-8ms range, unless these multiple streams were strategically arranged for this multiple read/write scenario (which is not going to be the case), fragmentation or the lack thereof is a moot issue on the DVRs.


----------



## flipptyfloppity (Aug 20, 2007)

Steve said:


> All the more reason to keep fragmentation to a minimum. Think about the head activity needed to seek from 3 contiguous files vs. needing to seek from 3 fragmented files. It could increase exponentially. /steve


rudeney kinda covered it already, but I'm gonna cover it again anyway.

The value of not being fragmented is that if you make sequential reads, these reads can be serviced quickly, as the head doesn't have to be moved to new spots to continue the read.

But the PVR doesn't make sequential reads: it writes a bit to stream 1, then writes a bit to stream 2, then reads from a third stream, repeatedly, round-robin. It has to seek for every access, so it never operates in the "fast" case, even when working with non-fragmented streams/files.

For media content, the box most likely operates by allocating large chunks consecutively, as mentioned by rudeney. That means no file can be in fragments smaller than perhaps 30 seconds or so of video. So your drive might do an extra 6 seeks a minute, which is peanuts, given they take 1/50th of a second each.
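The arithmetic in that last paragraph can be checked directly; the chunk size, seek time, and stream count are the post's own assumptions:

```python
# Assumptions from the post: ~30-second allocation chunks, so a worst case of
# 2 extra seeks per minute per stream; ~20 ms (1/50 s) per seek; 3 streams.
streams = 3                # two recordings + one playback
chunk_seconds = 30
seek_seconds = 1 / 50

extra_seeks_per_min = streams * (60 // chunk_seconds)   # 6 extra seeks/minute
overhead = extra_seeks_per_min * seek_seconds           # seconds lost per minute
print(f"{extra_seeks_per_min} seeks/min -> {overhead * 1000:.0f} ms lost per "
      f"minute ({overhead / 60:.2%} of disk time)")
# 6 seeks/min -> 120 ms lost per minute (0.20% of disk time)
```

A fifth of one percent of disk time supports the "peanuts" conclusion: even fully chunk-fragmented streams cost almost nothing next to the constant inter-stream seeking.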


----------



## TomCat (Aug 31, 2002)

rudeney said:


> ...Linux or any O/S running on FAT or NTFS will eventually need defragmenting to maintain performance...


There seems to be no evidence to support this. I can guarantee you that a sluggish guide is not due to fragmentation. I don't care how fragmented an HDD might become; it doesn't take more than a few added milliseconds to access data that is fragmented to the furthest reaches of a modern HDD. If you see your GUI pause for two seconds when it normally doesn't, that is not because it takes that long for the hardware to access fragmented files; it is for another reason, usually because it is busy doing other, more processor-intensive things. And that is normal. That's what you want it to do: pause the less-important tasks rather than skunk a recording. Fragmentation is just not the issue.

To "maintain performance" can only imply that fragmentation means performance will take a hit. But the hit is so mild as to not even be noticeable. That's if DVRs fragment at all, which apparently they don't.

The real problem with fragmentation is not the performance hit; it is the increased potential for the extents tree and other cataloging structures to lose track of the various fragments. Quite obviously, the risk of losing track of one coherent file is much less than the risk of losing track of any one fragment of a file that is fragmented scores of times. Defragging should be done not to increase performance (which it can only do marginally) but to increase reliability. There really is no practical reason to defrag a consumer DVR.



Steve said:


> ...as the Linux System Administrator Guide states, "Modern Linux filesystem(s) keep fragmentation at a minimum by keeping all blocks in a file close together, even if they can't be stored in consecutive sectors. Some filesystems, like ext3, effectively allocate the free block that is nearest to other blocks in a file. Therefore it is not necessary to worry about fragmentation in a Linux system."...Irrespective of the above (subjective) statement, file fragmentation can be an important issue in server environments such as in multi-media server applications...


Professionally, media is typically stored on RAID or similar systems, which means the data is "fragmented" by definition, on purpose. Speaking from years of experience as an administrator of many hardware systems hosting various "multi-media server applications" I find no evidence to support the statement "file-fragmentation can be important". Maybe it can be, but in my experience, it never once has.


----------



## P Smith (Jul 25, 2002)

TomCat said:


> ... Speaking from years of experience as an administrator of many hardware systems hosting various "multi-media server applications" I find no evidence to support the statement "file-fragmentation can be important". Maybe it can be, but in my experience, it never once has.


Perhaps you could listen to other server admins who handle other types of file storage, for example Mechanical Desktop with huge assemblies: tens of thousands of files, a few GB total.


----------



## TomCat (Aug 31, 2002)

P Smith said:


> Perhaps you could listen to other server admins who handle other types of file storage, for example Mechanical Desktop with huge assemblies: tens of thousands of files, a few GB total.


Well, that is somewhat outside the category of media files. I imagine that for some applications defragging can indeed be important; I just don't see it for media files or the performance of reading them. Tens of thousands of files? Heck, my PowerBook alone probably has 300,000 files on it, or so it claims during backups.


----------



## P Smith (Jul 25, 2002)

While I agree this case was not in the multimedia arena, your backup (a slow, sequential, single process on a local PC) is totally different from a group of mechanical/electrical/etc. designers loading those assemblies each morning.
As to the number of files on those servers during backups: hundreds of millions, on a few SDLT tapes in multi-drive DLT libraries.


----------

