OESF Portables Forum

Everything Else => Sharp Zaurus => Model Specific Forums => Distros, Development, and Model Specific Forums => Archived Forums => C1000/3x00 Hardware => Topic started by: neuroshock on December 17, 2005, 01:15:16 pm

Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 17, 2005, 01:15:16 pm
Hey Folks,

I'm pretty frustrated- frustrated enough to offer some cold, hard cash (payable in USD via PayPal) to the genius who solves my problem here, if they can deliver.

Here's what I'm trying to accomplish:

I'm a C3100 owner and would like to upgrade the 4gb internal Hitachi Microdrive to a new 6gb retail-version Hitachi Microdrive. There are no compatibility problems with the 6gb drive: it is recognized fine in either the interior or exterior CF drive bay, and I can rudimentarily move data to and from it with no problems.

Just to be painfully clear up front I want the finished product to work with Cacko 1.23 Heavy.

Apparently the C3100's Microdrive comes fdisked from the factory into 3 partitions:

1.) The first is a 9mb partition that is apparently empty; Meanie's site says that the C3000 uses this for a copy of the system files but that it is unused on the C3100. It is formatted with ext3.

2.) The second is another 9mb partition that is also apparently empty; Meanie's site says that the C3000 uses this partition as well but that it appears to be unused on the C3100. It is also formatted with ext3.

3.) The third partition takes up the entire rest of the disk and is formatted with FAT32. The C3100 formats this partition FAT32 to give the user access to it for storage from within the C3100's OS environment, as well as to let the C3100 be recognized as a USB storage device by a Windows 2000/XP machine.

The fdisk partitioning table for the stock 4gb Hitachi Microdrive is as follows:

   Device    Start    End    Blocks   Id  System
/dev/hda1        1     20    10048+   83  Linux
/dev/hda2       21     40    10080    83  Linux
/dev/hda3       41   7936  3979584     c  Win95 FAT32 (LBA)

After switching out the HDD’s what I wish to end up with is a partitioning scheme that includes the following:
-Complete compatibility with Cacko 1.23
-A 512mb Swap Partition (NOT a swap FILE!)
-A 2gb partition formatted with ext3 that is visible from within the Cacko 1.23 environment – It must be possible to install programs onto this drive!
-A 3gb FAT32 partition for hdd3/hda3 so that I can still use the Z as a USB storage device under Windows 2000.

My competency level in hardware is sufficient to handle the actual physical switching of the hard drives (I've already done so several times).
My competency level in Linux in general isn't nearly as good. (If it were, you wouldn't be reading this post and taking my money.) =) But I can handle some command line duties (cp, ls, mkdir, etc.) as well as simple file editing tasks. I'm also very competent within Fdisk itself. Beyond the basics I'm asking for complete hand-holding, and that's what I'll be paying for.

I want someone who has definite knowledge of the C3100 and its specific hdd issues - you will need to give me step-by-step, detailed instructions to follow in order to accomplish what I've listed above. This includes EVERYTHING; do not assume that I know anything (you'll be disappointed if you do). This will need to include every step, from fdisking to formatting to editing the scripts that the C3100 uses to mount its internal drives on boot.

I know you'll pm me anyway, but for all those who are going to send me a PM excitedly telling me about the How-To available in the Zaurus Wikipedia that covers this EXACT SUBJECT, and that if I go read it I can save my money etc. etc. etc.: I've read it. It doesn't help me- enough. Plus my issues differ from that scenario in several ways.

Besides, that particular how-to is a bit deceptive in what the author actually offers. (Don't get me wrong- GREAT info, it just doesn't stick to the labeled topic, which is how to convert the FAT32 partition to ext2/3.) In the first section he explains rather well how to repartition and reformat the 4gb internal hard drive from FAT32 to ext2/3. When he continues on, however, he offers the reader his own edited scripts to compare against and modify accordingly. But if you read carefully, it turns out that the partitioning scheme he is using is ext3/ext3/FAT32/ext2. Therefore the scripts won't work for a user who is simply trying to convert an ext3/ext3/FAT32 drive to an ext3/ext3/ext3 partitioned drive (as the title of the listing would indicate). That leaves the reader to extrapolate what they do and don't need from the scripts, and after looking into the scripts he listed I knew right away it was beyond me.

Possibly the fatal difficulty here will lie in the partitioning-scheme fundamentals. If, for reasons I don't understand, the C3100 MUST keep hda1 (9mb ext3 partition) and hda2 (9mb ext3 partition), even though they are not technically "used" by the C3100 model, then there will be no way to also add both a swap partition and a large ext3 partition on which I can install programs and view the contents. The simple reason being that you cannot have 5 primary partitions on one drive; the maximum is 4, I believe.

If this is confirmed to be impossible then I’ll have to decide which I want more- the Swap Drive or the FAT32/Windows USB drive capability. At that point I’ll make that decision and I’ll re-offer the reward for that configuration but I’m REALLY hoping that someone will find a way for me to have my cake and eat it too. As an alternative I could implement a loopback device to float my ext2 file system on a FAT32 drive but I REALLY want to avoid that performance hit!  - For the same reason I desperately do not want to have to settle for the performance of a swap file rather than a Swap Partition!
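For what it's worth, the loopback fallback mentioned above would look roughly like this. This is a sketch only- the image name and size are made up, and the root-only mount step is left as a comment:

```shell
# Sketch of the loopback idea: an ext2 filesystem living inside a plain
# file sitting on the FAT32 partition. Name and size here are made up.
IMG=ext2_store.img

# Carve out a 64 MB file of zeros to hold the filesystem
dd if=/dev/zero of="$IMG" bs=1M count=64 2>/dev/null

# Put an ext2 filesystem inside the file (-F lets mke2fs work on a
# plain file instead of a block device); skipped if mke2fs isn't present
if command -v mke2fs >/dev/null; then
    mke2fs -q -F "$IMG"
fi

# Mounting it needs root and loop support (not run here):
#   mkdir -p /mnt/ext2store
#   mount -o loop ext2_store.img /mnt/ext2store
```

The performance hit comes from every ext2 block access going through the FAT32 layer underneath- which is exactly the overhead described above.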

Regardless of all that, I will post the exact procedure I followed from whoever claims the reward, so that not only will that person get paid but the community will benefit from the knowledge in the future as well.

The reward will start at $50usd. As those of you who've claimed similar rewards from me in the past know, I tip very well also. If you feel the offer should be higher and you believe your skillz are up to the task, feel free to suggest what you feel is appropriate. I certainly won't mind you doing so- of course I may not be able to offer/afford that much, but you can certainly run it by me. At any rate, my guess is that this will be easy pickin's and someone will benefit from my ignorance very quickly. Not often do you find something like this that you can probably type up water-tight instructions for in less than an hour or so. And you don't even need to feel bad about taking advantage of me…. I WANT you to!

Likewise if someone wishes to rise to the occasion and provide this service but would rather not claim the reward for their personal use I will gladly donate it to the Zaurus developer of their choice instead.

If you need further information of any kind you have but to ask and I’ll make sure you get it as quickly as possible.

Please PM and/or Email me at your convenience.

Many Thanks,
-NeuroShock
dan@danzweb.com
Title: Nevermind. It's Been Fixed.
Post by: bam on December 17, 2005, 06:02:43 pm
ok, here we go,
the swap partition: does it have to be a primary partition, or can it be an extended partition?
Answers:
1. If primary only: I have seen fdisk present only 4 (primary) partitions max for a 4gb microdrive, so the answer for this one (swap partition) would be no.
2. If it can be an extended partition, then this can be done.


actually I have done this with the 4gb drive - I have hdd1-4, #4 being an ext2 partition (the tutorial is on my site). Although I don't run cacko (stock for now), the idea should be about the same. Actually, I think I know someone who has found this solution. I will check into it.
Title: Nevermind. It's Been Fixed.
Post by: frobnoid_ on December 17, 2005, 06:58:05 pm
Quote
After switching out the HDD’s what I wish to end up with is a partitioning scheme that includes the following:
-Complete compatibility with Cacko 1.23
-A 512mb Swap Partition (NOT a swap FILE!)
-A 2gb partition formatted with ext3 that is visible from within the Cacko 1.23 environment – It must be possible to install programs onto this drive!
-A 3gb FAT32 partition for hdd3/hda3 so that I can still use the Z as a USB storage device under Windows 2000.

The simple reason being that you cannot have 5 primary partitions on one drive, the maximum is 4 I believe.

but I REALLY want to avoid that performance hit!  - For the same reason I desperately do not want to have to settle for the performance of a swap file rather than a Swap Partition!

First off, let me throw out that I don't own a 3100 so I'm unable to guarantee this works perfectly. However, since I don't care about a reward, I don't feel bad giving you  a "This should work" solution.

Why do all five need to be primary partitions? Unless there's some really quirky zaurus-related reason, they don't need to be.

So, start a terminal and:
su
fdisk /dev/hda
d 3 #delete partition 3
n p 3 (enter) +3072M # create a 3GB partition #3 (accept the default first cylinder, just after hda2)
n e 4 (enter) (enter) # create a ~3GB extended partition #4
n l  (enter) +512M # create a 512MB partition #5
n l (enter) (enter) # create a 2.5GB partition #6
t 3 b # label partition #3 as FAT32
t 5 82 # label partition #5 as swap
t 6 83 # label partition #6 as Linux
w # write changes back to the disk

Now you've got your five partitions and are back at a command prompt.
mkswap /dev/hda5 # make partition #5 a swap partition
mke2fs -j /dev/hda6 # make partition #6 an ext3fs (-j makes it ext3...)
mkdosfs -F 32 /dev/hda3 # make partition #3 a FAT32 filesystem

Now you've got your filesystems created and just need to update /etc/fstab.
Load that up in your favorite editor (you need to be root to edit it, so do it from the command prompt, or run "chmod a+rwx /etc/fstab" first so user zaurus can edit it too...)

Change the fstab to add the following lines at the end:
/dev/hda5 (tab) none (tab) swap (tab) sw (tab) 0 0

You haven't specified where you want your new filesystems to be mounted.
Add the following line to your /etc/fstab, replacing /FOO with the mount point you'd like to reach the ext3fs at (/tmp for example [which is probably not a particularly good use of the space ])
/dev/hda6 (tab) /FOO (tab) ext3 (tab) defaults (tab) 0 0

Your new fat32 file system will continue to be mounted in the same place as the old one.

If you'd specify exactly where you want the filesystems, I'm happy to go into much more detail as to the appropriate lines to add to /etc/fstab.
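Pulling the fstab pieces together, the two added lines would look something like this. Written to a demo file here rather than the real /etc/fstab; "/FOO" is still a placeholder mount point:

```shell
# Demo of the two /etc/fstab additions described above. On the Zaurus
# you would append these lines to /etc/fstab itself (as root); "/FOO"
# stands in for whatever mount point you pick.
cat > fstab.demo <<'EOF'
/dev/hda5	none	swap	sw	0 0
/dev/hda6	/FOO	ext3	defaults	0 0
EOF
```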

Now reboot.
Title: Nevermind. It's Been Fixed.
Post by: frobnoid_ on December 17, 2005, 07:00:21 pm
Quote
So, start a terminal and:
su
fdisk /dev/hda
d 3 #delete partition 3
n p 3 (enter) +3072M # create a 3GB partition #3 (accept the default first cylinder, just after hda2)
n e 4 (enter) (enter) # create a ~3GB extended partition #4
n l  (enter) +512M # create a 512MB partition #5
n l (enter) (enter) # create a 2.5GB partition #6
t 3 b # label partition #3 as FAT32
t 5 82 # label partition #5 as swap
t 6 83 # label partition #6 as Linux
w # write changes back to the disk

 I should note that these are shown as single lines for convenience of explanation; each (space) in these instructions is really a press of (enter), and the hash mark (#) and the text following it are comments and shouldn't be typed in.
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 17, 2005, 08:18:53 pm
First off- everyone that is PM'ing me PLEASE UNDERSTAND THIS:
I am NOT trying to modify my 4gb drive that came with my C3100! I am REPLACING that drive with a factory-clean retail version 6gb Hitachi Microdrive!  YES I KNOW there are directions for altering the third partition of the original drive- they don't completely apply here. That only affects a small facet of what I'm trying to do. In the document everyone keeps referring to, ONLY ONE PARTITION IS ALTERED. What I'm attempting will require at LEAST THREE partitions to be altered. This means the MAIN problem to be dealt with is that at least one of the first two PRIMARY partitions must somehow be freed from what the C3100 startup scripts demand/expect in order for it to boot. Please read my original post and this one completely and you should understand the debacle. Normal instructions for partitioning and mounting, for Linux in general and Zaurii specifically, for the most part DO NOT APPLY here. It is a problem unique to the C3000-C3100 series.

So far I think Bam is the only one who's grasped the significance of the situation and who also understands the C3100 limitations. It's VERY easy to misunderstand the full scope of the issues here.  

A special THANK YOU to everyone who has PM'ed or emailed me trying to help, I appreciate your support tremendously and your encouragement is the only thing that's keeping me going at the moment on this project. Don't feel bad if you gave suggestions that have turned out not to apply- I appreciate the help and if you wanna take a new run at it just reread these two posts and I'd be thankful for anything you can learn in this direction. I value your time and your willingness to share it with me is a testament to your friendship.  

In a perverse way - it is always uplifting when I run into these "drive me crazy" issues, as it reminds me how many friends I've got here on the forums. It's what makes this place great. =)


Frobnoid,

Thank you, but it's a problem specific to the C3100.  Your suggested directions were almost identical to my 2nd attempt of 16 total attempts.  The problem lies herein, and is twofold:

One.)  The first and second primary partitions MUST exist as ext3 and MUST be 9mb in size apiece. Furthermore, the number of sectors for each of those partitions must be identical to the original hard drive installation (20 apiece). This gives us two useless 9mb partitions and only two other primary partitions to work with- or one primary with extended partitions. This is mandatory for the machine to boot at all. (Not what I WANT, but mandatory nonetheless.)

Two.) On the C3000 and C3100 the three partitions that come from the factory on the original 4gb microdrive are mounted via scripts only, and automatically. If you mess with the partition tables of the first two partitions, the unit will refuse to reboot. At all. You CAN change the variables for the third primary partition. The unit will STILL refuse to reboot, but when it halts you can press Ctrl-c to get it to continue and finish the boot. However, when it finishes booting you will find that you have no access to the third primary partition at all, and it is impossible to manually mount it. The scripts I'm referring to try to force the third partition to be FAT32, and if it is not they simply refuse to mount it and prevent any other method of mounting.

Probably the key to everything I'm butting my head up against at the moment is inside those scripts. Unfortunately they are complex and WAAAY beyond my understanding - since these scripts control everything else during boot as well, including access to the read-only file systems and their compression/expansion, I do not wish to simply "hack away" at them indiscriminately to see what will or won't happen. You could easily lock yourself out at this point. I don't wish to go there. Yet.

So the person that comes through for me on this will definitely either have to already HAVE a C3100 at their disposal, or at least have done quite a bit of work with one and be able to spout the needed changes off the top of their head.  

For the good of the C3100 community someone needs to sort out what Sharp has left us screwed with. It's just bizarre. And all of the "problem areas" go back to the fact that, for some reason, the C3100s are being crippled down to the hard drive partition limitations that the C3000s were forced to live with because they had virtually no flash memory. It's JUST LIKE Sharp to give a device 128mb of flash memory to fix the problem and then continue to enforce limitations on the hard drive structuring that are NO LONGER NEEDED due to the increase in flash mem! There is simply NO NEED for Sharp limiting the C3100 in this way and forcing the user to deal with empty partitions that we can neither use nor modify! The loss of two primary partition slots and 18mb of HDD space for NO GOOD REASON is infuriating.

It is one of the BLATANT little sharp edges that need to be knocked off and polished smooth for the good of the platform. And if I'm sizing it up correctly, it is completely a matter of simple script-reworking and basic config file altering. This should be EASILY within our grasp.

BAM has pointed me to some more material that is at the heart of the issue; however, it still seems beyond me by just a bit. He is the original author of the document that the "how-to" I found in the Zaurus Wikipedia was tailored after. The person that posted it tweaked it a bit for his personal use before posting, and so I've finally determined why there are two parts of that document that seemingly did not jell together. Bam has also stated above that he will continue to look into this (and dangit, if he can't do it with his background, knowledge, and skillz I'll be greatly surprised! He's given a HUGE amount of technical help to the Z community over quite a long period of time.) BUT I don't know how much time he has to throw at this at the moment.

SO if ya wanna pick up some quick and easy money- or if you'd like your favorite Zaurus developer to get a nice Christmas bonus- jump on in here and help me out! I'll pay the first person who can get us through this one, even if someone comes behind you and does it better later. I REALLY want to start messing with some projects that are going to require that 6gb drive, and I'm completely stalled at the moment.

In the mean time I'm going to take my limited amount of skillz and go back to Bam's THE GRINDER and try to see if I can learn enough to save myself $50 or so. (NOT likely!) I'm just not that smart. But I may be that stubborn.

Frustratedly Yours,

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: frobnoid_ on December 17, 2005, 09:12:21 pm
Quote
One.)  The first and second primary partitions MUST exist as ext3 and MUST be 9mb in size apiece. Furthermore, the number of sectors for each of those partitions must be identical to the original hard drive installation (20 apiece). This gives us two useless 9mb partitions and only two other primary partitions to work with- or one primary with extended partitions. This is mandatory for the machine to boot at all. (Not what I WANT, but mandatory nonetheless.)
Then add the following before the commands I've provided:
fdisk /dev/hda
n p 1 (enter) 20 # make partition 1, ending at cylinder 20 (matches the stock layout)
n p 2 (enter) 40 # make partition 2, ending at cylinder 40 (another 20 cylinders, as stock)
t 1 83 # make partition 1's type  be linux
t 2 83 # make partition 2's type  be linux
w

(back at the shell prompt)
mke2fs -j /dev/hda1 # make partition1 ext3fs
mke2fs -j /dev/hda2 # make partition2 ext3fs

You've now got two 9MB partitions which come first on your drive.
Following the other directions will get you the other three partitions setup.

Quote
Primary partition at all and it is impossible to manually mount it.  The scripts I'm referring to try to manually force the third partition to be FAT32 and if it is not they simply refuse to mount it and prevent any other method of mounting.

This shouldn't be a problem, in my example the third partition remains FAT32.
Title: Nevermind. It's Been Fixed.
Post by: ThC on December 17, 2005, 09:32:44 pm
The script mounting the partitions is the one I asked you about in your other post ... please provide your /etc/rc.d/rc.rofilesys file and I'm pretty sure someone here will make it so you can have your disk layout (I'll make a try btw ...)
Title: Nevermind. It's Been Fixed.
Post by: loc4me on December 17, 2005, 09:36:18 pm
I have a C3000. Could someone explain to me what the purpose of the two 9mb partitions on the hard drive is? It has been said that they are not used at all on the C3100, so what is their purpose on the C3000? Thanks
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 17, 2005, 10:13:25 pm
Quote
Quote
One.)  The first and second primary partitions MUST exist as ext3 and MUST be 9mb in size apiece. Furthermore, the number of sectors for each of those partitions must be identical to the original hard drive installation (20 apiece). This gives us two useless 9mb partitions and only two other primary partitions to work with- or one primary with extended partitions. This is mandatory for the machine to boot at all. (Not what I WANT, but mandatory nonetheless.)
Then add the following before the commands I've provided:
fdisk /dev/hda
n p 1 (enter) 20 # make partition 1, ending at cylinder 20 (matches the stock layout)
n p 2 (enter) 40 # make partition 2, ending at cylinder 40 (another 20 cylinders, as stock)
t 1 83 # make partition 1's type  be linux
t 2 83 # make partition 2's type  be linux
w

(back at the shell prompt)
mke2fs -j /dev/hda1 # make partition1 ext3fs
mke2fs -j /dev/hda2 # make partition2 ext3fs

You've now got two 9MB partitions which come first on your drive.
Following the other directions will get you the other three partitions setup.

Quote
Primary partition at all and it is impossible to manually mount it.  The scripts I'm referring to try to manually force the third partition to be FAT32 and if it is not they simply refuse to mount it and prevent any other method of mounting.

This shouldn't be a problem, in my example the third partition remains FAT32.

But on a 6gb drive you can only have a total of four primary partitions, or three primary with extended (as Bam already pointed out). This is preclusive, as both the swap partition and whatever you choose for partition 3 will also demand to be primary. You would then have four primary partitions, and that prohibits any logical ones- and you would have to have at least one more to meet my requirements.

Also, this just addresses the partitioning. I also need to know how to properly copy the data from the old 4gb partitions to the new 6gb ones.
I've been told to use the dd command. I'm totally unknowledgeable in its use. Can you enlighten me to its use? Or offer a better alternative?
I'm VERY thankful for your help and don't want you to give up on me! I sincerely feel like you hold an important part of the solution for me, and it's obvious there is MUCH I can learn from you!

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: bam on December 17, 2005, 10:31:01 pm
dd command? why not just cp -a /hdd1/* to wherever you want, or perhaps download the tar backup of hdd1/2 and then follow the instructions (again, look at my site, or here in the forums)
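Bam's cp -a route, sketched here on stand-in directories- on the Z the real source and destination would be wherever the old and new partitions end up mounted:

```shell
# cp -a demo on stand-in directories; on the Zaurus the source would be
# the mounted old partition (e.g. /hdd1) and the destination the new one.
mkdir -p old_hdd1 new_hdd1

# a little demo content so the copy has something to preserve
echo "sample contents" > old_hdd1/example.conf
chmod 640 old_hdd1/example.conf

# -a = archive mode: recurse, and keep permissions, ownership, timestamps
cp -a old_hdd1/. new_hdd1/
```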
Title: Nevermind. It's Been Fixed.
Post by: frobnoid_ on December 17, 2005, 11:09:03 pm
Quote
But on a 6gb drive you can only have a total of four primary partitions, or three primary with extended (as Bam already pointed out). This is preclusive, as both the swap partition and whatever you choose for partition 3 will also demand to be primary. You would then have four primary partitions, and that prohibits any logical ones- and you would have to have at least one more to meet my requirements.

I'm not aware of any requirement that swap be a primary partition. With that said, I don't know that I've ever tried swap on an extended partition. If you've seen it stated as a requirement, can you point me at such a reference? (I'm always interested in learning something new)

Quote
Also this just addresses the partitioning. I also need to know how to properly copy the data from the old 4gb partitions to the new 6gb ones.
I've been told to use the dd command.  I'm totally disknowledgeable in it's use.  Can you enlighten me to its use? Or offer a better alternative?

The following should be sufficient (as would bam's suggestions):
With the 4GB drive in place:
dd if=/dev/hda1 of=~/hda1.dd # read all data off partition 1, store that data in "hda1.dd" in your homedir
dd if=/dev/hda2 of=~/hda2.dd # read all data off partition 2, store that data in "hda2.dd" in your homedir

With the 6GB drive in place, properly partitioned:
dd if=~/hda1.dd of=/dev/hda1 # read from ~/hda1.dd and output it to the new partition 1
dd if=~/hda2.dd of=/dev/hda2 # read from ~/hda2.dd and output it to the new partition 2

dd reads/writes the raw bytes from the filesystem ("if" is "input file", "of" is "output file").

If you put both disks in the Z at once, you should be able to do the dd directly without storing to the internal flash by doing:
dd if=/dev/ORIGINAL1 of=/dev/NEW1
dd if=/dev/ORIGINAL2 of=/dev/NEW2

where ORIGINAL is the location of your 4GB CF and NEW is the location of your 6GB CF. "hda" is the one internal to your unit. I don't know whether the other will be "hdb" or "hdc" (if you have no SD card in, run "df -k" from the prompt, and you'll see which of the two is in use...)
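One extra step worth taking after any of the dd copies above is a byte-for-byte comparison. Demonstrated here on plain files, since the real /dev/hda* devices need root, but the same cmp applies to the partition copies:

```shell
# Verify a dd copy byte-for-byte. Plain files stand in for the real
# partitions here; on the Z you'd run cmp against the two /dev nodes.
dd if=/dev/zero of=source.img bs=1k count=16 2>/dev/null  # stand-in "old partition"
dd if=source.img of=copy.img 2>/dev/null                  # the copy itself

# cmp is silent and exits 0 only when the two are identical
cmp source.img copy.img && echo "copy verified"
```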
Title: Nevermind. It's Been Fixed.
Post by: adf on December 17, 2005, 11:18:38 pm
I couldn't find the "how to disassemble a 3000/1000/3100" directions. If anyone could send me a link, I'd appreciate it. The 6gig is here, and the 3100 was sent from Japan Friday.


I'm thinking that I'll just install pdax to the flash, then swap drives, then do a nice easy cfdisk....Assuming I can see how to open the thing up.
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 17, 2005, 11:33:37 pm
Great. Cool, I've done a test run of both methods of copying drive contents, dd and cp (thanks Frobnoid and Bam!), and both seem to work very well. Got that one knocked out- turned out to be a LOT simpler than I thought it would be. 'Bout time things started going in that direction.


Hrmm. I'm like you, Frobnoid- I've never tried a swap drive via a logical partition, but that may be just what makes this thing feasible. My apologies, Bam, I see now that you were trying to say that very same thing in your PM to me. I'm getting ready to give a go at a 9mb ext3 / 9mb ext3 / FAT32 (whatever the exact size of the stock FAT32 partition is) layout, and then make two logical partitions, one formatted as swap and the other as ext3. This still doesn't truly deal with the issue of the 18mb of wasted space and two unneeded primary partitions, but I think it's the best I'm probably going to get until more users start upgrading their C3100s with larger drives and someone comes up with a more elegant solution that addresses the first two primary partitions and the mounting scripts.

I'll post my results back here to this thread. If this fails, I'm going to fall back to a 9mb ext3 / 9mb ext3 / FAT32-with-loopback-device-for-ext3-usage / swap partitioning scheme. Either that, or dump the swap partition in favor of an ext3 one and use a swap file instead. I'll probably benchmark both ways to see which gives me the better performance scenario.
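If the fallback does end up being a swap file, the setup is short. A sketch only, with a made-up size and filename- mkswap and swapon need root, so they're shown but not run here:

```shell
# Swap-file fallback sketch. 32 MB here just for the demo; on the Z
# you'd size it at 512 MB and put it on the ext3 partition.
dd if=/dev/zero of=swapfile.img bs=1M count=32 2>/dev/null
chmod 600 swapfile.img   # swap contents shouldn't be world-readable

# Root-only steps on the actual unit (not run here):
#   mkswap swapfile.img
#   swapon swapfile.img
# and to enable it on every boot, an /etc/fstab line along the lines of:
#   /path/to/swapfile.img  none  swap  sw  0 0
```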


Loc4me: On the C3000, the first 9mb ext3 partition houses the system files that the C3100 keeps in flash memory. The second partition is also used for something important (can't remember right now). But basically, since the C3000 has so little flash memory, that model uses the first two microdrive partitions to handle this job instead. The C3100 was given 128mb of flash memory, which is more than enough to handle those jobs without having to use space on the microdrive. It also makes the C3100 much faster in some respects, because most programs are loaded from flash memory rather than the microdrive, and data transfer from a cold call is virtually instantaneous compared to getting the data from a microdrive, which has to spin up before transfer. (Once the drive is spinning, however, the speed differential isn't so big.) So do NOT alter those partitions on a C3000! They are "vestigial" on a C3100 and seem to hold no purpose other than to waste a small but noticeable amount of resources. I'm sure in future ROM revisions those resources and space will be recovered- but for now it's not generally a big enough issue to draw enough attention to get fixed.

Going now to try my hand at some of Bam's bag of tricks.

Thanks All,

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 17, 2005, 11:59:14 pm
Adf,

I haven't seen the disassembly page you're referring to, but it's a piece of cake.  You will need one Phillips jeweler's screwdriver.

- Shutdown your C3100.
- Remove any CF card.
- Remove any SD card.
- Remove all cables etc.
- Flip the unit over.
- Remove the battery cover.
- Remove the battery.
- Remove the small black screws that hold the bottom cover in place. (There are 6 altogether: 4 in deep holes and 2 in the recessed battery cover area.)
- Carefully remove the bottom cover- it helps to pull on the side with the irda first, as the headphone jack on the other side partially anchors it. Just pull from that side first; it'll be intuitive from there.
- Note the position and placement of the small black "switch" that locks the battery compartment in place- it only goes back together properly one way. Just be aware to look for it at that time, because it will probably drop out.
- You will then need to remove 4 silver screws from the motherboard layer you will then be looking at. For reference, the processor is under the silver metal shielding. You should also note that the holes the silver screws come out of have small "arrows" printed on the circuit board, to show you which holes they belong in if you forget. (Nice of them, eh?)
- Then pull carefully on the edge of the board at the FRONT of the C3100. This is important, as you will need to get the edge of the board at an angle before pulling FRONTWARD, because the thumbwheel, DC input, and other buttons extend out the back of the unit.
- IMPORTANT- at this point it's a handy time to gently work the serial port cover out of its slot- it should become partially loose on its own, but it needs to be removed before the board will release totally from the bottom cavity.
- Once the board is completely free, it's also handy to open the screen to a 90 degree angle from the keyboard and lay the C3100 with the back of the screen on the table. The keyboard should be sticking straight up, and the motherboard will then easily set down on the table without straining any cables.
- There are two silver screws with very fat heads that hold the SD card slot onto the motherboard (the SD slot is on a daughtercard that easily unplugs from the motherboard once the screws are removed.) Set this aside.
- You will then find a black bracket that holds the CF drive in place. It unsnaps at two points on the SD side of the motherboard.
- Once unsnapped tilt that side upwards and it will slide up and off the back of the CF card.
- Gently pull the CF card off of the pins.
- Exercise great care when you plug in the new drive to make sure the pins go in the right holes as there is no guide.

- Reverse process to reassemble.
Very, Very, easy.  

The only difficult part is the partitioning issue if you plan on altering it from the factory presets.

Personally I'm simply putting the 4gb drive into the case that the 6gb drive came in and storing it for safekeeping if I need to reinstall it for warranty issues.  After the warranty is over I'll use it for something I'm sure but until then I don't want to add to any headaches I may encounter if a warranty claim is required.  This way I can just reinstall the original drive, do a NAND restore and I'm ready to ship it off.

I hope this helps.  If you can't find any pictures and still feel you must have them before you proceed let me know and I'll try to take some for you and post them.

As always with something like this YMMV and if you screw the pooch - you're on your own so proceed with care, and patience!

Best of Luck!,
-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: adf on December 18, 2005, 12:48:09 am
A). Thank you very much!!

B). Warranty....hmmmm  maybe I should keep it in one piece while under warranty..
or until Tom61 gets his mods going....
C). If I decide to delay dissection, maybe I should use a USB wifi dongle? I ordered a D-Link DWL-122- any guesses on getting it to work and putting the 6-gig in the CF slot (for a 10-gig Z)?

D). Then again... it SOUNDS easy enough... (I'm one of those people who tends to have a "mystery" part left over after disassembling and reassembling something)


On the partition issues... If you pulled the drive from a 3100, wouldn't it just be a 1000? That is to say, won't it run from flash? (say with something far from Sharp, like pdaXrom or GPE?) If so, wouldn't the idea be to set a basic OS up in flash and just partition the thing?
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 18, 2005, 01:15:25 am
Quote
A). Thank you very much!!

B). Warranty....hmmmm  maybe I should keep it in one piece while under warranty..
or until Tom61 gets his mods going....
C). If I decide to delay dissection, maybe I should use a USB wifi dongle? I ordered a D-Link DWL-122- any guesses on getting it to work and putting the 6-gig in the CF slot (for a 10-gig Z)?

D). Then again... it SOUNDS easy enough... (I'm one of those people who tends to have a "mystery" part left over after disassembling and reassembling something)



On the partition issues... If you pulled the drive from a 3100, wouldn't it just be a 1000? That is to say, won't it run from flash? (say with something far from Sharp, like pdaXrom or GPE?) If so, wouldn't the idea be to set a basic OS up in flash and just partition the thing?

Great question Adf.  

If I'm understanding things correctly, yes....and no. If you search, you will find a few C1000 users that have attempted to reflash their models to a C3100, with a mixed bag of success. Their basic aim, of course, being to get the benefits of the newer ROM/applications. The general consensus seems to be that the end result isn't worth the effort. As far as I know nobody has tried to flash a C3100 back to a C1000, but more than a few people think it might be feasible. There are apparently a few minor issues that make the C3100 dependent upon the microdrive. Just for curiosity's sake I took a few minutes out to try booting my C3100 in both the standard Sharp ROM and Cacko without any microdrive whatsoever- no luck. As soon as it detects that it's not there, it freezes the boot sequence, and I can't get it to continue with a Ctrl-C or any other method I know of. However, if it were flashed with a C1000's ROM, I dunno. It might?
The only reason I could ever imagine trying that would be if I either ruined my microdrive and wanted to salvage it from one perspective or another, or if I decided to populate the interior CF slot with a Bluetooth or WiFi card. Hopefully the first won't happen. Ever.

Oh, sorry- I forgot about your PM about the dongle until just now. I'm using dongles for everything except WiFi and modem, so I don't know yet which WiFi dongles will work. In the PM I sent you, I was referring to my Bluetooth dongle. Incidentally, I DO have CF Bluetooth cards to try as well, and would like to get a WiFi dongle- so if your D-Link works well, let us/me know! That way I can use whatever peripherals I like, either CF or USB, interchangeably, depending on what I finally decide works best.

-For Bluetooth dongles I'm using a cheap-o IOGEAR off of ebay and a Billionton w/antenna; I also have a Belkin dongle that I haven't tried yet, but the first two work well.
-For 10/100 Ethernet I have the light blue one that is the same as Meanie has (look on his website for more info).
-My Pretec CF modem works well.
-The WiFi card I'm using is a D-Link DCF-660W. I tried my Pretec card but have had no luck with it yet. I think I saw directions for how to get it working with Cacko, however- I just haven't gotten far enough to get that done yet.

Umm, I think that's all I've tried to date. All those I mentioned as working work on both Cacko 1.23 and the Sharp ROM. Looks like I'll be staying with Cacko on my C3100, so I'll be donating to them soon. =) It's nice. REALLY nice.

Hope this helps Adf,

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 18, 2005, 09:08:39 am
Okay folks,

The hard part is done. I borrowed bits and pieces from everywhere and had many aborted attempts that didn't work out. However, I have found a setup that I am content with for now. I must confess it doesn't actually address the issue of freeing up the first two primary partitions, which are wasted partitions and wasted space on a C3100. Further, I avoided the scripting problems around the first three partitions by simply using a partitioning scheme that worked without altering them. Eventually this needs to be permanently addressed, but my personal skillz are not up to the task.

Bam and Frobnoid were not only the primary responders but eventually also the sources of most of the solution. A special thank you to both of you; I've decided to divide the money between the two of you equally. Please either PM me or leave me a message in this thread as to what PayPal address you wish me to send the money to. Frobnoid, if you're insistent on not accepting the money, please just choose a Zaurus developer that you think would deserve and benefit from the gift instead (such as Cacko, pdaXrom, Guylhem, etc., just to name a few). I'm sure they will be more than happy to see a few extra dollars roll in to keep the projects going.

Here's what I ended up with:

I replaced the interior 4gb Hitachi Microdrive with a new Retail version 6gb Hitachi Microdrive, model # HMS360606D5CF00.

My final fdisk partitioning scheme is:

Device       Start   End    Blocks      Id   System        Notes
/dev/hda1    1       2      16033+      83   Linux         Primary, 16mb ext3
/dev/hda2    3       4      16065       83   Linux         Primary, 16mb ext3
/dev/hda3    5       521    4152802+    b    Win95 FAT32   Primary, 4.05gb FAT32
/dev/hda4    522     746    1807312+    5    Extended      (container for the logical drives)
/dev/hda5    522     680    1277136     83   Linux         Logical, 1.25gb ext2
/dev/hda6    681     746    530113+     82   Linux swap    Logical, 512mb swap partition

This exact setup neatly bypasses the auto-mounting issues with the first three partitions. I have also seen no problems with my ext2 partition, which I use for installing large programs and subsystems such as X/Qt. The swap partition does indeed work as a logical partition inside the extended partition and has exhibited no apparent problems either.
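As a sanity check, the size annotations above follow from fdisk's Blocks column, which counts 1 KiB units. A quick sketch (the numbers are copied straight from my table; the helper function is just for illustration):

```shell
# Convert fdisk's 1 KiB "Blocks" figures into approximate MB.
blocks_to_mb() { echo $(( $1 / 1024 )); }

blocks_to_mb 16033     # hda1: ~15 MB  (the "16mb" ext3 partition)
blocks_to_mb 4152802   # hda3: ~4055 MB, i.e. the ~4.05gb FAT32
blocks_to_mb 1277136   # hda5: ~1247 MB, i.e. the ~1.25gb ext2
blocks_to_mb 530113    # hda6: ~517 MB, i.e. the ~512mb swap
```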

Incidentally, while testing and developing this strategy I found that I could use either fdisk or Partition Magic 8.0. I actually wish I had thought of it sooner. I simply plugged the 6gb Microdrive into a CF-to-40-pin IDE adapter I bought off of ebay about a year ago (normally used to run CF flash memory in embedded project devices) and plugged the adapter into a 40-pin ribbon cable on one of our desktops. Then I set the BIOS to boot from CD and voila! Resizing/sizing the partitions became a piece of cake, and Partition Magic even handled the formatting of the partitions for me. I also tried the 4gb card with the intent of just copying the partitions from one Microdrive directly to the other, but had no success: the reported geometry of the 4gb drive does not match its actual geometry, and therefore Partition Magic won't let you alter anything on it or read any data from it.

I likewise found several alternatives for moving the existing data from the original partitions to the new ones. I used dd and cp with equal success, and then later tried installing the Cxx00 version of TheKompany.com's tkc-explorer. I didn't think until later that, since Cacko already has Midnight Commander installed by default, I probably could have used it as well. The bottom line here is that there is no secret to getting the data moved; just make sure you get it all moved and that you preserve any directories and symbolic links if there were any programs installed. In my case I had no programs installed on the embedded hard drive, as I'm starting at ground zero with a clean install of Cacko.
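For the cp route, the flag that matters is archive mode, which preserves symlinks, permissions, and directory structure in one go. A minimal sketch- the function name and the example mountpoints are my own illustrations, not paths from the Zaurus:

```shell
# migrate SRC DST: copy a whole tree, preserving symlinks, permissions,
# and timestamps (-a = archive mode). The trailing /. copies contents,
# including dotfiles, rather than the directory itself.
migrate() {
    cp -a "$1/." "$2/"
}

# e.g. migrate /mnt/old /mnt/new   (hypothetical mountpoints for the
# old and new partitions)
```

A tar pipe (`(cd /mnt/old && tar cf - .) | (cd /mnt/new && tar xf -)`) does the same job and can be handy on stripped-down busybox builds where cp lacks -a.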

The above procedure will get you a clean boot without any reported errors, and with the first three partitions mounting fine and operating normally (they're just bigger now). I would suggest flashing/reflashing with Cacko (again, if you've already installed it) if you're planning to use Cacko after you install the larger Microdrive, just to make sure that Cacko can freely make any changes/symlinks/etc. to it that it desires. It would truly suck to do all this work and then later find something was broken after a few hundred hours of loading programs and joggling settings.

Things yet to be done:

I still need to edit the scripts a bit to get the last two partitions auto-mounted, but now I can use a direct copy of the scripts posted by Bam on his site to accomplish that. (This was one of the reasons I put the swap partition last instead of directly after the FAT32 partition. If the swap partition were closer to the center of the disk, performance would be a bit better when swap was being heavily used, but this whole setup is a compromise, and this one seemed a prudent trade to keep me from having to make blind edits to the script.)

For now I'm manually mounting the logical partitions and manually turning swap on and off via swapon/swapoff. Between editing the script to match Bam's and a quick entry or two in another file, I should have the swap partition automounting as well.
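The "quick entry or two" would normally go in /etc/fstab. A hedged sketch- the mountpoint /mnt/ext2 is an assumption, and the device names are from my partition table above; I write to a scratch file first so the entries can be reviewed before merging them into the real fstab:

```shell
# Draft the automount entries for the ext2 partition and the swap
# partition. FSTAB defaults to a scratch file; review it, then merge
# the lines into /etc/fstab by hand.
FSTAB=${FSTAB:-./fstab.new}
cat >> "$FSTAB" <<'EOF'
/dev/hda5  /mnt/ext2  ext2  defaults  0  2
/dev/hda6  none       swap  sw        0  0
EOF
# Once the lines are in /etc/fstab, "swapon -a" (or a reboot)
# activates every swap entry automatically.
```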

While this solution is neither elegant nor what I truly wanted, it is currently the only way I have found to meet my basic criteria. But I'll be satisfied with this, as it indeed preserves my ability to use my C3100 as a Windows USB storage device, gives me the ability to install programs to a native ext2/3 partition without the performance hit of a loop device, and provides the extra "memory" of the swap partition with better performance than I could have achieved with a swap file. No major performance penalties. Good enough.

For the future:

I'd really like to see someone directly address the lame vestigial primary partitions. This would free up a bit of space and allow for more advanced and flexible partitioning schemes in the future. A bit of performance would also be gained if logical partitions were not our forced solution for the two most heavily used partitions on the drive (the ext2 partition that programs run off of and the swap partition). The same can be said about reliability as about performance. It's not optimal as it is- not in the least.

I admit this is a "crutch-fix" meant to let me, and others who may go down this path, limp along until someone can handle it more thoroughly and permanently later on. It is simply the best that I could do at my skill and knowledge level, and it took a bit of growth and "reaching" for me to even come this close. I'm not the brightest bulb in the package. =\ But I'm stubborn.

I hope this helps others who may suffer from a similar lack of skillz but who still wish to upgrade their drives in the meantime. I'll try to post back to this thread any changes to scripts or files that need to be made to ultimately finish the job off, so that everything is automated.

Thanks All!,

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: bam on December 18, 2005, 10:36:18 am
Quote
Quote
But on a 6gb drive you can only have a total of four primary partitions, or three primary with extended. (as Bam already pointed out.) This is preclusive as both the swap partition and whatever you choose for partition 3 both will also demand to be Primary. You would then have four primary partitions and that prohibits any logical ones- and you would have to have at least one more to meet my requirements.

I'm not aware of any requirement that swap be a primary partition. With that said, I don't know that I've ever tried swap on an extended partition. If you've seen it stated as a requirement, can you point me at such a reference? (I'm always interested in learning something new)

Quote
Also this just addresses the partitioning. I also need to know how to properly copy the data from the old 4gb partitions to the new 6gb ones.
I've been told to use the dd command.  I'm totally disknowledgeable in it's use.  Can you enlighten me to its use? Or offer a better alternative?

The following should be sufficient (as would bam's suggestions):
With the 4GB drive in place:
dd if=/dev/hda1 of=~/hda1.dd # read all data off partition 1, store that data in "hda1.dd" in your homedir
dd if=/dev/hda2 of=~/hda2.dd # read all data off partition 2, store that data in "hda2.dd" in your homedir

With the 6GB drive in place, properly partitioned:
dd if=~/hda1.dd of=/dev/hda1 # read from ~/hda1.dd and output it to the new partition 1
dd if=~/hda2.dd of=/dev/hda2 # read from ~/hda2.dd and output it to the new partition 2

dd reads/writes the raw bytes from the filesystem ("if" is "input file", "of" is "output file").

If you put both disks in the Z at once, you should be able to do the dd directly without storing to the internal flash by doing:
dd if=/dev/ORIGINAL1 of=/dev/NEW1
dd if=/dev/ORIGINAL2 of=/dev/NEW2

where ORIGINAL is the location of your 4GB CF and NEW is the location of your 6GB CF. "hda" is the one internal to your unit. I don't know whether the other will be "hdb" or "hdc". (If you have no SD card in, run "df -k" from the prompt, and you'll see which of the two is in use...)


cool, I think I like the dd method better.... (I thought it would be more complex, but not really after seeing it- not much experience with dd, except for swap creation)
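For anyone wanting to rehearse the dd method before touching the real drives, dd behaves identically on ordinary files, so the whole round trip can be tried safely on throwaway data first:

```shell
# Rehearse the dd round-trip on scratch files instead of /dev/hda*.
src=$(mktemp)   # stands in for partition 1 on the old drive
img=$(mktemp)   # the intermediate image file (like ~/hda1.dd)
dst=$(mktemp)   # stands in for partition 1 on the new drive

dd if=/dev/urandom of="$src" bs=1024 count=64 2>/dev/null  # fake partition data
dd if="$src" of="$img" 2>/dev/null   # "read all data off partition 1"
dd if="$img" of="$dst" 2>/dev/null   # "output it to the new partition 1"

cmp "$src" "$dst" && echo "copies are identical"
```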
Title: Nevermind. It's Been Fixed.
Post by: Cresho on December 18, 2005, 11:26:56 am
i was about to suggest partition magic 8 and also the cf slot drive..... been doing it for years on my z
Title: Nevermind. It's Been Fixed.
Post by: raybert on December 18, 2005, 03:58:54 pm
Quote
... gave me the ability to install programs to a native ext2/3 partition without the performance hit of a loop device being implemented....

... A bit of performance would also be gained if Extended partitions were not our forced solution for the two most heavily used partitions on the actual drive, (the ext2 drive that programs will run off of and the Swap Drive.). The same can be said about reliability that is said about performance.  It’s not optimal as it is – not in the least. ...
I question the theoretical "performance hit" of these two scenarios, as well as whether their reliability is suspect.

As for extended partitions, I would think that any extra work to access these would be done at mount time.  Once they are mounted and the drivers know the addresses of the partitions I would expect there to be zero performance impact.  Why do you think differently?

As for loopback, the magic that takes place there is done in software and in RAM: I would expect that there are no additional device interactions that take place.  I'd bet that any performance impact would be hard to measure, much less sense on a human level.  Again, why do you think differently?

I think the same could be said about swap files vs. swap partitions.  (In fact, I would not be surprised to find that loopback is used to implement a swap file.)  I doubt you'd really experience any difference in performance.

And lastly, I see no reason for there to be any kind of reliability hit with either of these.  If they work, they work.  What would make them any less reliable than other solutions?

The one thing about your set-up that would bother me is the two "vestigial" partitions.  They do no harm except wasting some space, but it's somewhat ugly to have to keep them.  I would expect though that this can be fixed easily if it truly is only scripts that control initialization.  OTOH, if Sharp stuck something boneheaded in their proprietary code, you'll probably be living with this for a while.

Anyway, glad to see you got your system working.  Good luck with it.

~ray
Title: Nevermind. It's Been Fixed.
Post by: adf on December 18, 2005, 04:06:44 pm
Quote
OTOH, if Sharp stuck something boneheaded in their proprietary code, you'll probably be living with this for a while.

This is why I suggested pdaxrom or OZ for the experiment. I bet they did just that.
Title: Nevermind. It's Been Fixed.
Post by: polito on December 18, 2005, 05:24:44 pm
Actually, the two 9MB partitions aren't useless. If you've ever done an 'ls -la' on them you'll find that they've got a .sys folder in them which has some tarballs.

From what I recall they're used when the system boots into the backup/restore system and launches some sort of rudimentary operating environment which only handles the backup and restore so it can get a reliable backup/restore without files being in use, etc.

Please note that what I say here is pieced together from my rather interesting memory and from having scanned a few posts about them somewhere that I can't remember. But the main thing is that the partitions do have a use, and I can't remember whether or not the special areas get the .sys folder and other tarballs recreated in them.

Just figured I'd throw my 50 cents in.  I do agree that it's rather lame to have goofy baby partitions... perhaps it's something like the PC BIOS limitation on accessing files above 1024 cylinders in order to boot? I remember having to create small 16mb /boot partitions to hold nothing but the kernel and some other system files so that LILO could boot Linux. I don't know if there's something similar with ARM or not. Maybe Sharp just figured two little required partitions would be the only way they could guarantee that certain system things would just be there and they wouldn't need to worry about it *SHRUGS*
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 19, 2005, 01:26:32 am
Quote
Quote
... gave me the ability to install programs to a native ext2/3 partition without the performance hit of a loop device being implemented....

... A bit of performance would also be gained if Extended partitions were not our forced solution for the two most heavily used partitions on the actual drive, (the ext2 drive that programs will run off of and the Swap Drive.). The same can be said about reliability that is said about performance.  It’s not optimal as it is – not in the least. ...
I question the theoretical "performance hit" of these two scenarios, as well as whether their reliability is suspect.


Great questions. (I'd expect nothing less, mind you!) Some have fairly easy answers with real-world arguments--- and some are founded more in my pet peeves than in huge performance hits. Honesty does us all good. =)

Please bear in mind up front that this partitioning scheme is helpful to me because I will be using extremely demanding X software via X/Qt that demands more system resources than the C3100 can normally give. This heavy usage greatly amplifies the performance hits I take from these issues, compared to a casual user or someone who only uses Qtopia-based programs written specifically for the Zaurus.

So here we go-

As for the first question, this one is simple. The main performance hit comes from drive geometry issues and hardware performance from the hard drive's perspective, more than from the CPU or software; however, the first portion I'll discuss is on the CPU and software end of things. Data being delivered to drive partitions is routed by priority (much like IRQs establish priority for devices receiving the CPU's attention and therefore bandwidth). So primary partitions receive primary routing. Extended partitions must take a back seat to primary partitions when routing conflicts occur. And they occur a LOT in IDE implementations. Further, extended partitions are just that- partitions that are extended FROM a primary partition. Actually, it would be more accurate to say they are extended THROUGH the primary partition. For extended partitions, not only do all routing calls have to be delivered through the primary partition, but by definition the data also sits on a logical partition. As you were inferring about swap possibly being a loopback device (and we'll address that shortly), the extended partition is somewhat of a similar system that sits upon a logical drive- "logical" in this case meaning "doesn't really have a physical address on this side of the interface"; remember, the primary partition is providing the actual calls for logical partition access, and then the interface tells the head/servo where to go.

So to recap: any data that goes to or comes from an extended partition must first wait for any primary partitions to clear the route. Then, just to get/put the data in the right place, the processor on the HDD controller has to calculate from the actual physical geometry what the "advertised geometry" would need to be for the extended partition, and then repeat this process for each data pack. It becomes very processor/controller intensive very quickly. It's why primary partitions are almost always preferred for OSes to boot from. Ditto for swap partitions. It's why IDE bogs down so badly compared to SCSI and later interfaces, ESPECIALLY when you also have a primary and secondary hard drive on the same channel. This is because the primary drive (Master) provides controller services for BOTH the Master and the Slave disk. This is why it's so much faster to copy from a Master drive on one channel to a Master drive on the second channel rather than from a Master to a Slave: they can't transfer data via the Master and Slave simultaneously, as the Master controller provides all translation services and can only handle one at a time. This is the same issue as our primary/extended partition issue, just on a whole other level. A last quick note in response to a question concerning this: the Master/Slave issue can be resolved by Cable Select negotiations on a modern or "current" IDE interface- if everything works together. But for our purposes, Microdrives still only adhere to the "yesteryear" performance specifications of ATA-33 and prior implementations that almost always HAD to have Master/Slave configurations.

We can sum all of the above up as "translation overhead" that is dramatically increased when the most-used partitions are also extended/logical partitions. You are just introducing two more translation levels, as well as the lower-priority issues, compared to avoiding all of it by putting that data on primary partitions in the first place. This "fault," if you will, can be rooted in the OS's drivers as well as in the initial hardware interface translation on the controller itself.

That's the hard part. The easy part of the answer is much simpler for most people to understand. In most drive geometries, primary partitions almost always get the "favored locations" for data. The two favored locations are the first track and the middle tracks. This is because the physical head of the drive is most often over those two areas- much more so than anywhere else on the drive. This is why just about every operating system in existence that uses a physical hard drive as its operating medium will by nature put its most-accessed files on the first or middle tracks IF the user partitions the entire physical drive as one large partition. Things get muddled really fast when multiple partitions are used, as the OS has no real way of knowing where the new physical first and middle tracks are located.

But one thing is ALWAYS true. Extended/logical partitions are NEVER located on the first track, and in REAL-WORLD APPLICATION are usually located PAST the middle track as well, simply because they are almost always placed AFTER the primary partitions physically on the drive.
Quick second-half recap: data that is placed AFTER the middle physical track takes longer to get to, simply because the servo arm/head has to travel farther out of its normal range to reach the track. Period.

Add the two together and you end up with a worst-case scenario for an HDD with platter geometry. Not only do the CPU, the software drivers, and the CPU on the HDD controller take longer to translate HOW to get to the data- but once they do, the physical servo arm takes LONGER to travel to the spot it needs to read from. You'll immediately notice that one of these problems is contained within the OS and its drivers, and the other is completely within the IDE HDD controller itself.

Does it contribute to real-world performance hits on physical hard drives? You betcha. These are known basic issues that have been around for as long as hard drive technology itself. Most end users, and even programmers, don't know the details of WHY certain partitioning schemes give better performance, but it's been ground into the community for ages to simply do things like put your OS and swap in the earliest primary partition available. (This is also where the old- but still true- mantra of "put your swap partition on the earliest primary partition of your least-used physical drive" for best server performance comes from.) But feel free to run your own performance tests if you doubt the rationale here; it never hurts not to take someone else's word for granted!


Quote
As for extended partitions, I would think that any extra work to access these would be done at mount time.  Once they are mounted and the drivers know the addresses of the partitions I would expect there to be zero performance impact.  Why do you think differently?


Oops, I already covered most of this above. Also, the software drivers only know the "advertised"- or in our case "LBA"- addresses of anything on the hard drive. The CPU on the HDD controller must then translate from LBA into the actual physical drive geometry. Hence the bottleneck and performance hit explained in long form above. The software drivers of ANY OS that uses a modern IDE HDD are completely blind to the actual drive geometry. Even the CMOS of your desktop computer is blind to it and only knows/uses the LBA geometry reported by the HDD itself. The CPU on the HDD controller then translates the value that the OS calls for to the real physical value on the HDD. The reason it's done this way is to overcome would-be geometry limitations like we used to have back with early IDE, RLL, and MFM drives. It's also the general basis of the problems and solutions around operating systems recognizing drives beyond a certain size/geometry. The actual drive geometry is COMPLETELY known only to the physical electronics of the HDD controller mounted on the drive, and is never exposed to the operating system. In this way the OS can use HDDs with capacities MUCH greater than the system builders or operating system engineers ever imagined possible when they released their products. The HDD controller (which is mounted on the drive itself for IDE drives) does all the work for this, and in doing so also becomes our performance bottleneck here.


Quote
As for loopback, the magic that takes place there is done in software and in RAM: I would expect that there are no additional device interactions that take place.  I'd bet that any performance impact would be hard to measure, much less sense on a human level.  Again, why do you think differently?


You're exactly right: "the magic that takes place there is done in software and in RAM." I couldn't have said it better myself. And because of this, both the software and the RAM required to make the loopback-device translation require extra CPU cycles and CPU as well as memory bandwidth. By definition, anything you add that requires additional software/RAM to handle will add processor overhead and incur a performance hit.

However, in certain circumstances you have a valid point- with pure flash memory, specifically SD cards when used with Zaurii. Because of Sharp's ridiculous insistence on an MMC-compatibility-mode implementation of the SD card slot, the performance of any particular SD card may be severely limited by that bottleneck. For example, an SD card advertised as 10x speed may be a bit faster than a normal SD card in a Zaurus SD slot, but a 32x card will offer no more performance gain than a 10x card because of this enforced bandwidth limitation. Because of this bottleneck, you can use an SD card formatted FAT and give yourself ext2/3 storage via a loopback device with hardly any performance hit at all. Testing by OESF members has put the entire performance hit at about 1% of the bandwidth used in these SD transfers. So it can be a pretty smart move to use a loopback device with an SD card on a Zaurus.

However, the CF slots do not have this built-in bottleneck limiting performance. Because of this, the percentage hit when using a high-speed CF device with a loopback device floating on its partition CAN be very substantial. The faster the CF device, the worse the relative hit. The bandwidth/CPU overhead for using a loopback device on a FAT-formatted CF device can rise as high as 30-33%- especially when you are using the CF device to run something large that runs completely off the CF partition in ADDITION to swap. (BTW, this is regardless of primary/logical placement.) A good example of an application that fits this scenario would be running X/Qt and swap on the same CF drive through a loopback device. OUCH. Almost 100% of your data calls have to be routed through this loopback translation, and the SOFTWARE and RAM that provide this magic have to steal processor cycles from your CPU for each and every packet. And every time they do, there are fewer CPU cycles and less RAM available for running the actual program you're using.
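For readers who haven't set one up, the loopback arrangement being discussed looks roughly like this- a sketch only; the image path and size are made-up, and the final mount step needs root and loop-device support:

```shell
# Sketch: build an ext2 image on a FAT-formatted card, then attach it
# via the loop driver. IMG would live on the FAT card on a real Z.
IMG=${IMG:-./ext2.img}
dd if=/dev/zero of="$IMG" bs=1024 count=8192 2>/dev/null   # 8 MB image

# Format the image as ext2 if mke2fs is present (-F: target is a file,
# not a block device).
command -v mke2fs >/dev/null && mke2fs -F -q "$IMG"

# On the Zaurus (as root), the image is then mounted through the loop
# driver- this is the software/RAM translation layer discussed above:
#   mount -o loop -t ext2 ./ext2.img /mnt/ext2image
```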

You can find most of the information you would need to look into this further, or to verify any of the above, right here on the OESF forums. Just do a search for SD cards, loopback devices, etc.- it's how I found out that the performance hit was so low for SD cards in the first place (much to my surprise at the time).


Quote
I think the same could be said about swap files vs. swap partitions.  (In fact, I would not be surprised to find that loopback is used to implement a swap file.)  I doubt you'd really experience any difference in performance.


I'll try to keep this one brief, simply because of how well known a performance issue it is. (Nobody pass out here- I know I'm not often brief.) The difference between using a swap file versus a swap partition is very real and very measurable. The more intensely the file inside the loopback device is used, the greater the performance hit. You can search for good info on this very topic right here on these forums as well. The fact that, in this case, that very performance hit is amplified by the extended-partition/translation-overhead issue only makes it that much larger.

You do bring up an interesting point about the swap and loopback issues; I can clarify it a bit. The analogy is EXACTLY correct when applied to a swap file, as a swap file is simply swap formatting superimposed over a file on a regular partition- and this is accomplished, of course, using the magic of software and RAM. Sound familiar? Again, any layer of translation is always accomplished by an additional layer of software that uses additional RAM (ironically, this also works your swap that much harder), stealing cycles from your Zaurus's CPU and available bandwidth all the while. A swap partition, on the other hand, is a partition that must be formatted either by the user after its creation or by the system on its first use. In that respect it's just like any other kind of partition and, unlike a loopback device, no translation layer is needed. So you had the right idea; you were just applying it over too general an area.
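For comparison, here is how each is typically set up- a sketch; the device name and file path are assumptions, the demo file is deliberately tiny, and swapon itself needs root:

```shell
# Swap partition: format the partition once, then enable it (as root):
#   mkswap /dev/hda6 && swapon /dev/hda6

# Swap file: carve out a file with dd, format it as swap, then enable it.
SWAPFILE=${SWAPFILE:-./swapfile}
dd if=/dev/zero of="$SWAPFILE" bs=1024 count=4096 2>/dev/null  # 4 MB demo
command -v mkswap >/dev/null && mkswap "$SWAPFILE" >/dev/null 2>&1

# Then (as root): swapon ./swapfile ; and later: swapoff ./swapfile
```

Either way the kernel sees swap space; the swap-file route just adds the filesystem (and, on some setups, loopback) layer discussed above.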


Quote
And lastly, I see no reason for there to be any kind of reliability hit with either of these.  If they work, they work.  What would make them any less reliable than other solutions?


The reliability issues here, quite frankly, are MUCH more difficult for me to explain away, because the truth is they are NOWHERE NEAR as great an issue as the performance issues are.  You’ve got me cold on this one, I must admit.

The only shred of evidence that I’ll proffer in this respect is that if the extended partition that the logical partitions live inside (or the partition table chain that links them) becomes corrupted, those logical partitions are most likely laid to waste as well. This doesn’t happen often, and even when it does, with modern IDE technology it’s usually somewhat recoverable.

To sum it up, I quickly tossed the “reliability” card onto the table, equated it to the performance issues without thinking it through, and in doing so misrepresented the facts. Thank you for pointing this out – if we don’t hold ourselves accountable when we’re incorrect, we lack the integrity to be believed when we are!


Quote
The one thing about your set-up that would bother me is the two "vestigal" partitions.  They do no harm except wasting some space, but it's somewhat ugly to have to keep them.  I would expect though that this can be fixed easily if it truly is only scripts that control initialization.  OTOH, if Sharp stuck something boneheaded in their proprietary code, you'll probably be living with this for a while.


I agree completely and whole heartedly.

To readers of this post/thread, let me take a moment to turn things around and completely defend Ray’s right to question my performance claims. My line of logic and what he was probably basing his doubts upon are two differing technologies.  In his defense, all of the CF-device performance issues we’ve discussed in this posting would completely flip-flop if we were talking about CF flash memory cards rather than Microdrives specifically!  Almost all of the performance penalties I’m complaining about are unique to an actual physical hard drive, with physical heads, servos, spinning platters, and the electronic components that control their movements!  If we were talking about CF flash memory cards instead, just about 100% of these performance hits would not exist, because the controlling circuitry is VERY different and CF flash cards have no major moving parts whatsoever. Keep in mind that Microdrives are just that- they are miniature HDDs in every respect, just on a much smaller scale. So don’t be too quick to think he was completely out in left field for putting forth his doubts.

If these topics interest you either way, I would encourage you, the reader, not to take either of our words on this topic as gospel truth, but rather to spend a half hour or so poking around the forums here and the internet in general- you’ll end up with a MUCH better understanding of how hardware and software issues affect the end performance of your Zaurus. Many of these things are things you, the user, can easily control, and by using your resources and setup properly you can see nice performance gains without any additional monetary expenditure. And THAT is ALWAYS a good thing!

Something else to note is that HOW you use your Zaurus, and what you use it FOR, will greatly impact whether you personally see any real-world performance gains.  In my case I will be using X/Qt and some X-based programs that demand desktop/server-level memory and storage resources in order to perform well.  Because of this, the things I’ve discussed matter a LOT in how fast my Zaurus will perform under such a load. And since these things ARE something I can control, I’ve chosen to do so as much as possible, since things like upgrading my C3100 to a faster CPU and/or more physical RAM are impossible options for me at the time of this writing.  However, if you are someone who is more apt to use streamlined native Qtopia programs written specifically for your Zaurus, you may never even need a swap file or swap partition in the first place!  As a matter of fact, if you do not normally use enough RAM to warrant the need for one, installing one will only DEGRADE the performance of your Zaurus.  So for my particular usage these measures, strategies, and precautions make sense.  For others they may not!

I also must close by confessing that ANY performance-inhibiting thing that exists in my Zaurus that I feel should or could be changed drives me CRAZY until it is fixed. I am an absolute performance nut, overly zealous - a performance junkie, I suppose.  While every point I’ve made is true within its own context, several of these issues are difficult enough to set up that many users would simply not find ANY performance boost justification enough to go to the trouble of tackling them.  This is all the more true if the boost would be minimal because their normal Zaurus usage doesn’t push the resources already available beyond normal limits.


Quote
Anyway, glad to see you got your system working.  Good luck with it.
~ray


Thank you! I’m very glad too, and as always I wish you and everyone the best with theirs.  Please don’t feel that I went to all of this trouble to be confrontational, rather I was excited and overly thrilled that for once someone was asking questions that I had intimate knowledge and the ability to give detailed and hopefully helpful answers to you and other users that may trip over this post! (This doesn’t happen often.)

So thank you for the opportunity it has afforded me to help anyone who may learn from this info.  It makes me feel better to have a few tidbits to give back to the community that I take so much from so often.

For anyone who’s interested the majority of the knowledge expressed in this post was from working as a line technician in a robotically driven IDE storage manufacturing facility for several years. I may not know much- but what I do know I know pretty well. =)

Cheers!,
-NeuroShock

EDIT: The first response was edited for clarity when a reader pointed out that part of the blame lay on the OS/driver side of the issue as well as on the HDD controller. This has been corrected. (Thanks for the keen eye and quick heads up.)
Title: Nevermind. It's Been Fixed.
Post by: Meanie on December 19, 2005, 07:21:22 am
This is one of my future projects for when I get a bigger cf card or have time on my hands.

Since the C3100 is not dependent on the partition geometry (i.e. sizes) but rather on the partition names (/hdd1, /hdd2, /hdd3), it is possible to just resize those partitions.

I plan to make /hdd1 my swap partition, /hdd2 ext3 partition for applications and /hdd3 fat32 for file storage and usbdisk.
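A rough sketch of how that layout might be created (the device name, sizes, and partition types below are illustrative only, and the whole sequence destroys everything on the drive, so it is shown for orientation rather than to be run):

```shell
# DANGER: rewrites the whole partition table. Back everything up first.
# fdisk /dev/hda            # interactively: delete the old partitions, then create
#                           #   hda1  ~64 MB,  type 82 (Linux swap)
#                           #   hda2  ~1 GB,   type 83 (Linux)
#                           #   hda3  rest,    type c  (Win95 FAT32 LBA)
# mkswap    /dev/hda1       # /hdd1 -> swap
# mke2fs -j /dev/hda2       # /hdd2 -> ext3 for applications
# mkfs.vfat -F 32 /dev/hda3 # /hdd3 -> FAT32 storage / usbdisk
```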
Title: Nevermind. It's Been Fixed.
Post by: speculatrix on December 19, 2005, 08:08:34 am
a quick note about using dd, "cp -pr" and tar.

dd is a great way of copying the raw data which makes up a file system. Unfortunately, it's also very dumb - it copies used and unused blocks alike, so a 4GB disk partition with only one file on it will still create a 4GB dump. Of course, you can compress the output of dd quite successfully. Creating a single very large file filled with zeros (and then deleting it) helps a lot here, as it ensures as much of the free space is zeroed as possible. dd is only suitable for copying a disk to another disk when the partition sizes are the same; otherwise you can have some very odd problems.
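The zero-fill trick is easy to demonstrate on ordinary files (the device and mount-point names below are illustrative; the runnable part only touches /tmp):

```shell
# On the mounted filesystem, fill the free space with zeros, then delete:
# dd if=/dev/zero of=/mnt/cf/zerofill bs=1M; rm /mnt/cf/zerofill
# Then image the raw partition; the zeroed free space compresses away:
# dd if=/dev/hda3 | gzip > /tmp/hda3.img.gz

# Why it works: a megabyte of zeros gzips down to roughly a kilobyte.
dd if=/dev/zero of=/tmp/zeros.bin bs=1k count=1024 2>/dev/null
gzip -c /tmp/zeros.bin | wc -c    # prints a number in the low thousands
```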

"cp -pr" will indeed copy a filesystem. The snag is that it doesn't understand symbolic links, so if you have, say,
  libsomething.so, libsomething.so.1, libsomething.so.1.1
where the first two are soft links to the latter, when you do the copy you'll end up with three full copies, not one file and two links.

tar is often the best way to copy a file system, as it can not only preserve ownership but also symbolic links:
  cd olddir
  tar cf - . | (cd newdir ; tar xf -)

what this does is to tar up the current directory and downwards, sending stdout (writing the tar) to a pipe, then in another process CD'ing to the destination, and unpacking the tar file from stdin (this is what "-" means... either write to stdout or read from stdin).

using tar like this is perhaps the best way to copy a disk filesystem from one place to another. Strictly speaking, you should use "tar xfBp -" to unpack: B makes tar reblock short reads from the pipe (usually the default when the input is stdin, the "-" char) and p preserves permissions.
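A quick way to convince yourself that the tar pipe keeps symlinks as symlinks (the directory and file names here are just examples):

```shell
mkdir -p /tmp/olddir /tmp/newdir
echo data > /tmp/olddir/libsomething.so.1.1
ln -sf libsomething.so.1.1 /tmp/olddir/libsomething.so.1    # soft link
(cd /tmp/olddir && tar cf - .) | (cd /tmp/newdir && tar xf -)
ls -l /tmp/newdir    # libsomething.so.1 shows up as a link, not a third copy
```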

you can also make backups like this:
    tar cf - . | gzip > /tmp/mybackup.tar.gz

or even copy the filesystem from one machine to another:
   tar cf - . | gzip | ssh othermachine "cd newdir ; gunzip | tar xfBp -"

note that gzip and gunzip are usually the same file, with a softlink from one to the other, and the program works out which one is which when run.

hope this helps
Paul
Title: Nevermind. It's Been Fixed.
Post by: bam on December 19, 2005, 11:52:34 am
this is perhaps the most useful thread I have ever read. With you guys' OK I will copy sections to my site, especially the hard drive/swap file/loopback device material. Great work, Neuro!
Title: Nevermind. It's Been Fixed.
Post by: speculatrix on December 19, 2005, 12:21:13 pm
more on dd, tar, cp

on linux, you can use "dump" to dump a filesystem to a backup device... on solaris, this is called "ufsdump" to make a backup of the file system; this is a bit more robust than tar - it works at a lower level. I'm not sure if dump has been built for the Z.

there's also a command called "cpio" which is more powerful than tar; I very rarely use it, but it's worth being aware of if you want more flexible control over how and what to archive than tar gives you.
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 19, 2005, 12:28:09 pm
Bam,

I agree - VERY useful info flying around everywhere in this thread. I've learned a LOT myself.  By all means you have my explicit permission to reuse any portion of what I've posted that you may find useful to yourself or others.
Your site is a wonderful repository of knowledge and a great asset to the Zaurus community.

Have a Great Day All!,

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: bam on December 19, 2005, 12:32:12 pm
Quote
This is one of my future projects for when I get a bigger cf card or have time on my hands.

Since the C3100 is not dependent on the partition geometry (ie sizes) but rather the partition names (/hdd1, /hdd2, /hdd3) it is possible to just resize those partitions.

I plan to make /hdd1 my swap partition, /hdd2 ext3 partition for applications and /hdd3 fat32 for file storage and usbdisk.



can you put a directory on a swap partition? ie .sys?

cool Neuro, put it over there already...good stuff!
Title: Nevermind. It's Been Fixed.
Post by: cybersphinx on December 19, 2005, 01:15:39 pm
Hm... some of your explanations are completely contradictory to what I know about computers (PCs mainly, some things might be different on other platforms, though I don't know why they should be).

Quote
As for this first question this is simple. The main performance hit comes from drive geometry issues and hardware performance from a hard drive's perspective more than the CPU or software. Seeks being delivered to drive partitions are routed by priority.

That's the first time I have heard this. You make it sound like every partition is a separate device which gets addressed separately on the bus itself (i.e. in hardware). But (as far as I know, and I'm pretty sure of this) the hardware only knows about the whole disk; the partitioning only concerns the software. So every access to the disk gets handled when it arrives (well, perhaps not anymore, since the drive firmware probably does some optimizing of the accesses). Any prioritization in partition access will (or will not) be done in software.

Quote
(Much like irq's establish priority for devices recieving the CPU's attention and therefore bandwidth.) So Primary partitions recieve primary routing. Extended partitions must take back seat to Primary partitions when routing conflicts occur.  And they occur a LOT on an ide bus.

Like I said, the IDE bus doesn't know anything about partitions, so there are no partition-based priorities.

Quote
Further, Extended partitions from a drive geometry translation perspective are just that- extended partitions that are extended FROM a primary partition.  Actually it would be more accurate to say they are extended THROUGH the Primary partition.  For Extended partitions, not only do all routing calls have to be delivered through the interface via the Primary partition, but by definition extended partitions sit on a Logical partition also.

On a usual PC harddisk there can be four primary partitions (defined in the master boot record's partition table), for compatibility with DOS-based systems (up to Windows ME; the NTs probably haven't changed anything there for compatibility's sake), and usually (there are exceptions) DOS-based systems can only see one of those. To get around the four-partition limit (and to get more than one partition in DOS), logical partitions were invented. Those are the same as primary partitions, but include a partition table themselves.

A pure Linux system can work without any partitions; you can just use the whole device and create a file system on it (like "mkyourfavouritefs /dev/hda; mount /dev/hda /mnt"). Or use a non-DOS partitioning scheme, which probably doesn't have those limitations in the first place. Of course, your disk will be incompatible with DOS systems then, but who cares?

Here are two links about partitions: http://www.ranish.com/part/primer.htm and http://www.lissot.net/partition/partition-03.html.

Quote
As you were inferring about Swap Drives possibly being a loopback device (and we'll address that shortly) the extended partitions are somewhat of a similar loopback device system that sits upon a Logical (in this case meaning "doesn't really have a physical geometry", remember the Primary partition is providing the actual calls for geometry access when the head/servo need to know where to go), drive.

The only translation that's done is from LBA to the actual drive geometry in the drive's controller, that shouldn't be a performance issue (but could be, given the usual stupidity in PC hardware).

Quote
It's why Primary partitions are almost always preferred for OS's to boot from.  Ditto for Swap Partitions.

That comes from the time when the fastest transfer rate was on the first sectors of a disk. Nowadays you can't usually say where access is fastest, since you don't know which logical addresses are mapped to which physical sectors.

Quote
It's why IDE bogs down so badly compared to SCSI and later interfaces,

That's because SCSI is more intelligent about data transfers, especially if there are lots of devices involved. IDE was just the cheaper and fast enough solution for the masses.

Quote
ESPECIALLY when you also have a Primary and Secondary hard drive on the same Channel.  This is because the Primary drive (Master) provides controller services for  BOTH the Master and the Slave disk.

Quoted from http://en.wikipedia.org/wiki/Advanced_Technology_Attachment (http://en.wikipedia.org/wiki/Advanced_Technology_Attachment): "Although they are in extremely common use, the terms master and slave do not actually appear in current versions of the ATA specifications. The two devices are correctly referred to as device 0 (master) and device 1 (slave), respectively. It is a common myth that "the master drive arbitrates access to devices on the channel." In fact, the drivers in the host operating system perform the necessary arbitration and serialization. If device 1 is busy with a command then device 0 cannot start a command until device 1's command is complete, and vice versa. There is therefore no point in the ATA protocols in which one device has to ask the other if it can use the channel. Both are really "slaves" to the driver in the host OS."

The problems with two devices on the same bus are: 1. Only one device can use the bus at the same time, and 2. The bus runs at a speed both devices support, so a slow device limits the speed of a faster one. (Both might have changed in the last years, I don't really know. But it surely is the base for the "two devices on the same bus are slower than on two busses" saying.)

Quote
This is why it's so much faster to copy from a Master drive on one channel to a Master drive on the second channel rather than from a Master to a Slave.

When the devices are on two busses, both can be accessed at the same time, while on one bus one device always has to wait for the other to have finished its transfers.

Quote
We can sum all of the above up to be "translation overhead" that is dramatically increased when the most used partitions are also on extended/Logical partitions. You are just introducing two more translation levels as well as the lower priority issues as compared to avoiding all of it by putting that data on Primary partitions in the first place.

The only extra "translation" that is done when accessing logical partitions is that the addresses have to be read from the partition table in the extended partition in addition to the one in the master boot record.

Quote
That's the hard part. The easy part of the answer to this question is much more simple to understand for most people. In most drive geometries Primary Partitions almost always get the "favored location" for data.  The two "favored locations” are at the first track and the middle tracks.  This is because the physical head of the drive is most often over those two tracks - much more so than anywhere else on a drive.  This is why just about every operating system in existence that uses physical hard drives as their operating medium by nature will put it's mostly accessed files on the first or middle tracks IF the user partitions the entire physical drive to one large partition.  Things get muddled really fast with multiple partitions are used as the OS has no real way of knowing where the new physical First and Middle tracks are located.

But one thing is ALWAYS true. Extended/Logical partitions are NEVER located on the first track and are usually located PAST the Middle track as well simply by fact that they are almost always placed AFTER the primary partitions are physically on the Drive.
Quick second half recap - data that is placed AFTER the Middle physical track takes longer to get to simply because the servo arm/head has to travel farther out of it's normal range to get to the track. Period.

Well, nowadays that's not necessarily true anymore, since one logical address can be (almost) anywhere physically on the drive, and the data mapping can differ between drives as well (see http://www.lissot.net/partition/mapping.html).

Quote
You'll immediately notice that both of these problems are contained within the IDE HDD controller itself and have little/nothing to do with the actual CPU, bandwidth etc. of your Zaurus.

That's only half true. There should be no performance penalties for using a logical partition instead of a primary one (provided both use the same area of the disk).

Quote
Quote
As for extended partitions, I would think that any extra work to access these would be done at mount time.  Once they are mounted and the drivers know the addresses of the partitions I would expect there to be zero performance impact.

Right.

Quote
The HDD controller (that is mounted on the drive itself for IDE drives) does all the work for this and in doing so also becomes our performance bottleneck.

But it should be able to do this as fast as the interface requires (except perhaps on some really cheap drives - after all, it's still PC hardware...).

Quote
Quote
As for loopback, the magic that takes place there is done in software and in RAM: I would expect that there are no additional device interactions that take place.  I'd bet that any performance impact would be hard to measure, much less sense on a human level.  Again, why do you think differently?

You're exactly right: "the magic that takes place there is done in software and in RAM". I couldn't have said it better myself.  And because of this, both the software and the RAM required to make this loopback device translation require extra CPU cycles as well as memory bandwidth.  By definition, anything you add that requires additional software/RAM will add processor overhead and incur a performance hit.

Loopback devices (and swap files) have a certain performance hit, because every access has to be done through a file system, which is noticeably more complex than directly accessing a device itself. This gets more noticeable when the device gets faster in relation to the main CPU doing all the work (as in the Zaurus, where you have a relatively slow CPU).

Quote
Quote
And lastly, I see no reason for there to be any kind of reliability hit with either of these.  If they work, they work.  What would make them any less reliable than other solutions?

I guess there is a larger chance of things going wrong when going through a file system, but that shouldn't be an issue (I wouldn't use that file system for anything else, then).

cybersphinx

PS: Damn, somewhere I screwed up the quoting, but I don't see where. Sorry for that.
Title: Nevermind. It's Been Fixed.
Post by: frobnoid_ on December 19, 2005, 07:48:20 pm
Quote
"cp -pr" will indeed copy a filesystem. The snag is that it doesn't understand symbolic links, so if you have say
  libsomething.so, libsomething.so.1, libsomething.so.1.1
where the two former are a soft link to the latter, when you do the copy you'll end up with three files, not two.

tar is often the best way to copy a file system, as it can not only preserve ownership but also symbolic links:
  cd olddir
  tar cf - . | (cd newdir ; tar xf -)

cp -d will not dereference symlinks, that is, if the input is a symlink, the output will also be one.
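For example (GNU cp; the directory names are illustrative):

```shell
mkdir -p /tmp/a /tmp/b
echo x > /tmp/a/target
ln -sf target /tmp/a/link
cp -dpr /tmp/a/. /tmp/b/    # -d keeps symlinks as symlinks
[ -L /tmp/b/link ] && echo "still a symlink"
```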
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 19, 2005, 08:32:48 pm
Cybersphinx,

I just got back from a Christmas shop-a-thon with my daughter. I'm completely exhausted and headed for bed. I gave reading through your replies a shot, but with the formatting awry it made my head spin trying to decide who said what, when, and where. Don't feel bad, though; the Quote bug has bitten us all a time or three.

I'm pretty confused by most of the questions and replies, as sometimes what you say in your question seems to agree with what I said in the previous statement, even though that would seem contradictory.  Several of the questions you posed are just a matter of clarification, as we're approaching the same beast from different ends.

There are a few things that jump out at me, though. As you pointed out, you are definitely correct that the IDE controller is blind to partition priorities, as that is indeed handled by the OS and drivers.  I didn't proofread well enough and included that in the discussion of physical geometry limitations and head-travel performance hits.  It should have been identified as an external limiting process- a facet that is NOT a bottleneck of the IDE controller, but rather a performance issue belonging to the drivers/OS software. The overall mechanics are still the same, though: the prioritization problem still exists, and the overhead it creates is very real nonetheless.

Um, also, I stand by the extended partition track statement.  Since as far back as 1992 I cannot remember ever finding a single computing device that used an IDE HDD with an extended partition on track one. With modern Linux/Unix variants I suppose it is possible, but because of the overhead no one would intentionally do this. Yes, you might gain a meager speed boost by having an extended partition on track one just so you could say you did it, but you'd lose even more performance by introducing a logical partition in that position, for all the reasons I already stated. Theoretically, yes. Real world - never. If you were correct and this would fix the performance issues, it would be found everywhere.

Oh, and the key word in your quote from Wikipedia is "current".  Really- try data transfers from Master to Slave vs. Master to Master yourself. (And by the way, Master and Slave truly do NOT appear in the specs, but they still appear on ANY IDE HDD you can buy on the market today.  It's an industry-standard term recognized by every manufacturer in existence.) The proof is in the pudding.

The other piece of the "current" issue is that modern IDE controllers can usually be set up for Cable Select arbitration. But IF the OS drivers cannot establish a Cable Select arbitration sequence, then the primary drive's controller does all translation services for BOTH HDDs. (Ever wonder why the jumpers were there on IDE drives to choose between Master and Slave to begin with?) If IDE controllers were completely serial, as the partial Wiki quote stated, why would we need a Master and a Slave at all? The Wiki quote is a generalization about "current" devices.  But the IDE spec itself has default backward compatibility back to day one, and when Cable Select arbitration doesn't pan out (and it STILL often doesn't - Maxtor and HP drives STILL frequently will not Cable Select properly and make you end up manually jumpering one to Master and the other to Slave), we immediately get bumped back to "non-current" times.  Then my argument applies.

Oh, and this applies 100% of the time to Microdrives- ever wonder why manufacturers can't put two Microdrives on one CF controller?  Because there IS no CS option- it MUST be set up as Master and Slave, and the Master/Slave performance hit makes performance between them so abysmal that it's smarter to just add a second CF controller/slot. Good quote- but only correct within the terms ("current") that it specifies. The best Microdrives on the market are still only now adhering to ATA-33 standards. Sad, huh? Nothing "current" about Microdrive implementations in a Zaurus.


Um, we do know which tracks are the fastest, as an OS is almost always installed on a fresh disk, and all disks start writing at the first track regardless of translation and continue writing to consecutive sectors unless files need to be fragmented because other files are in the way - and on a fresh disk there ARE no files in the way to start with. So primary partition one and file one always sit on the first track. Always.

As for the rest of it, as I encouraged readers before: test these performance issues yourself in real-world situations. Every one I listed is nothing new, very real, and easily and repeatably tested and verified.  Old-school stuff in the extreme.  Introducing theoretical exceptions is fine, but they are scenarios that are simply not encountered in modern computing.  Performance tests will bear out each and every issue I discussed.

If my simple claim- that performance can be increased by placing the most often used data and swap partitions in primary partitions- is incorrect, then I am in good company, since as of this writing I do not know of a single developer, computer manufacturer, or handheld computing device manufacturer that does not configure their machines in this same manner in order to provide their user base with the fastest possible performance.  Real-world performance testing, and the real-world implementations of these technologies across the entire computing community, support my claims. We may be wrong, but if we are, then it's definitely pandemic. =)

Oh well, I'll try to give a more detailed response later when I'm not so tired and my head is clearer. I may just leave it as it is since the topics are so well known and such common knowledge issues that they pretty much stand on their own. They're somewhat like gravity. Nobody likes it, it can be painful as heck, but in the end after you've tested it six-ways-to-Sunday you just can't ignore the fact that it exists.

G'nite All,
-NeuroShock

Bam,
Sorry, almost missed your post. I would think you could put a directory/file on a swap partition using a loopback device, much as with any other formatted partition. It's somewhat a moot point, though, because the only scenario in which I could imagine wanting to do that would be if you needed more space for storage- and you prooobably wouldn't have a swap partition to begin with unless you felt you had enough space to justify creating one; otherwise you would implement a swap file instead.  Also, creating a loopback device on top of a swap partition would inflict the performance hit of all loopback devices and therefore degrade the performance of the swap partition- and since the reason for the swap partition is to increase performance in the first place, that would be counterproductive. Cool idea, though, and I guess if the loopback device wasn't being accessed under normal conditions it would hardly inflict any noticeable performance degradation! Nice idea, and sneaky too. I like it. =) You've got a sharp mind, bro.
But I assume it IS possible if someone knew how to code it properly (and THAT is WAAAY out of my league!)
Oh, also, you may wish to update your site with my edited post above rather than the original. I edited it for clarity, to note that the OS and drivers hold the blame for some of the routing issues (primary drives over extended) rather than the IDE HDD controller being solely to blame - as pointed out by Cybersphinx.
Be Well!,
-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: cybersphinx on December 24, 2005, 09:21:06 am
Quote
There are a few things that jump out at me though. As you pointed out you are definitely correct in the generalization that the IDE controller is blind to partition Priorities as that is indeed handled by the OS and drivers.  I didn't proofread well enough and included that inside of the discussion about physical geometry limitations and head travel speed performance hits etc.  Rather that should have been specified as to being an exterior limiting process- that is indeed a facet that would NOT be a bottleneck of the IDE controller but rather a performance issue related more to the software of the drivers/OS but the overal mechanics are still the same and the priortization problem still exists and therefore the overhead that it creates very real nonetheless.
OK, so I read that wrong - but I still think there is no performance impact in using extended/logical partitions instead of primary ones, at least not in principle. There might be operating systems (DOS etc. come to mind) where access to a logical partition has to wait for access to a primary partition to finish. I am pretty sure Linux isn't that braindead.

Quote
Um also I stand by the Extended partition track statement.  Since as far back as 1992 I cannot remember ever finding a single computing device that used an IDE HDD device with an extended partition on track one. With modern linux/unix variants I suppose it is possible but because of the overhead noone would intentionally do this. Yes, you might gain a meager speed boost by having an extended partition on track one just so that you can say you did it but you'll lose even more performance by introducing a Logical partition in that position for all the reasons I already stated. Theoretically yes. Real world - never. If you were correct and this would fix the performance issues it would be found everywhere.
Well, I think people tend to stick with what works, and thus some myths (which might once have been true) live far longer than they should. I don't have a microdrive at the moment (except a broken one), or else I'd test it: make the whole drive one partition, and then test in every way I (or others) can think of. I am pretty sure there will be no difference in performance. Hm, perhaps I can test this with an old computer here when I find the time. If you want me to test something specific, just tell me.

Quote
Oh, and the key word in your quote from Wikipedia is "current". Really - try data transfers from Master to Slave vs. Master to Master yourself. (Oh, and by the way, Master and Slave truly do NOT appear in the specs, but they still appear on ANY IDE HDD you can buy on the market today. It's an industry-standard term recognized by every manufacturer in existence.) The proof is in the pudding. The other piece of the "current" issue is that modern IDE controllers can usually be set up with Cable Select arbitration. But IF the OS drivers cannot establish a Cable Select arbitration sequence, then the primary drive's controller does all translation services for BOTH HDDs.
But cable select is just a way to determine the master and slave roles from where on the cable a drive is connected, instead of using jumpers. If cable select doesn't work, the master/slave roles will be undetermined and it's pure luck whether it works. When cable select works, the drive on the master plug of the cable will be the master, the same as when it's jumpered to master.

Quote
(Ever wondered why the jumpers were there on IDE drives to choose between Master and Slave to begin with?) If IDE controllers were completely serial, as the partial Wiki quote stated, why would we need a Master and a Slave at all?
To distinguish the two drives, to give them addresses. The same as with SCSI, where every device needs an ID to be addressed.

Quote
The Wiki quote is a generalization about "current" devices. But the IDE spec itself has default backward compatibility back to day one, and when Cable Select arbitration doesn't pan out (and it STILL often doesn't - Maxtor and HP drives still frequently will not Cable Select properly and make you end up manually jumpering one to Master and the other to Slave), we immediately get bumped back to "non-current" times. Then my argument applies. Oh, and this applies 100% of the time to Microdrives - ever wondered why manufacturers can't put two Microdrives on one CF controller? Because there IS no CS option - it MUST be set up as Master and Slave - and the Master/Slave performance hit makes performance between them so abysmal that it's smarter to just add a second CF controller/slot. Good quote - but only correct within the terms ("current") that it specifies. The best Microdrives on the market are still only now adhering to ATA-33 standards. Sad, huh? Nothing "current" about Microdrive implementations in a Zaurus.
OK. Perhaps the IDE implementation in the Zaurus is not current, and I don't know the specifics of CF cards etc.

Quote
Um, we do know which tracks are the fastest, as an OS is almost always installed on a fresh disk, and all disks start writing at the first track regardless of translation and continue writing to consecutive sectors unless files need to be fragmented because other files are in the way - and on a fresh disk there ARE no files in the way to start with. So primary partition one and file one always sit on the first track. Always.
Well, I was a bit confused about the track issues when writing (and you also have it slightly wrong). I guess we can agree on the following: Regardless of where the tracks are actually placed on the platter, what the OS sees as first track is on the fastest area on the disk (doing it otherwise would be possible, but that's a rather theoretical point).

Quote
As for the rest of it, as I encouraged readers before - test these performance issues yourself in real-world situations. Each and every one I listed is nothing new, very real, and easily and repeatably tested and verified. Old-school stuff in the extreme. Introducing theoretical exceptions is fine, but they are scenarios that are simply not encountered in modern computing. Performance tests will bear out each and every issue I discussed.
"Old school" - are you sure there are no myths or traditions mixed in there?

Quote
If my simple claim that performance can be increased by placing the most often used data and swap partitions in Primary partitions is incorrect,
I think it is. Like I said, I don't have a microdrive here, only some older parts to build a testing computer from (but that'd need some time).

Quote
then I am in good company, since as of this writing I do not know a single developer, computer manufacturer, or handheld computing device manufacturer that does not configure their machines in this same manner in order to provide their user base with the fastest possible performance. Real-world performance testing and the real-world implementations of these technologies across the entire computing community support my claims. We may be wrong, but if we are then it's definitely pandemic. =)
I think it's tradition to put the system in a primary partition, since that's the only thing DOS supported (and the first tracks are usually the fastest). Otherwise, why would you bother with logical partitions at all (if you don't need more than four partitions)? Just make some primary partitions instead of the logical ones for non-system drives.

Quote
Bam,
Sorry, almost missed your post. I would think you could put a directory/file on a swap partition using a loopback device similar to what you could use with any other formatted partition.
Not possible. A swap partition has no file system, so you can't put files on it, and you'd need a file for a loopback device.
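For readers looking for a practical alternative: the usual workaround is a swap file on an existing filesystem instead of a dedicated swap partition. A minimal sketch follows - the path /hdd3/swapfile and the 32 MB size are illustrative assumptions, and enabling swap requires root:

```shell
# Sketch: a swap file instead of a swap partition. mkswap writes a swap
# signature directly to the file; there is no filesystem inside it.
dd if=/dev/zero of=/hdd3/swapfile bs=1024 count=32768   # preallocate 32 MB
mkswap /hdd3/swapfile      # write the swap signature
swapon /hdd3/swapfile      # needs root; verify with: cat /proc/swaps
```

To enable it at every boot, the swapon line would go in a startup script that runs after /hdd3 is mounted.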

cybersphinx
Title: Nevermind. It's Been Fixed.
Post by: bam on December 24, 2005, 10:29:24 am
hmmm, bummer, was hoping to set up a swap partition, but I believe the Z needs the files on hdd1 to operate properly. I will double-check.
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 25, 2005, 05:54:28 am
Quote
hmmm, bummer, was hoping to set up a swap partition but I believe that the z needs the files on hdd1 to operate properly, but I will double check.

     He's incorrect concerning the swap partition. A swap partition must have a format like any other partition, even though it's rudimentary in comparison. Even if the partition only has one file, as he is inferring, the file must still be addressable. If the scenario you were hoping for was as you mention above, then you are still correct that it won't work, even though Cyber is wrong. IF you could install the file at all, it would only be through a loopback device, and loopback devices usually do not get initialized until much too late in the boot process to accomplish your goal. Even when they do get initialized, they are of course given a unique device identification by the OS, and since it cannot be hda1 - that identification would already be taken - it would never work anyway. From outside the folder, before the loopback is initialized, the files in question would just be gibberish. Still, sharp thinking to have contemplated the idea to begin with, but in the end... still a bummer. You're right, it would have been a cool way around the partition situation on the C3100.

     As for the rest of Cyber's replies - please. Do us all a favor: grab a C3100 or C3000 (whichever you own is fine) and a 3, 4, or 6 GB retail Hitachi Microdrive, and make yourself a script to do thorough benchmarking. Then take your script, process, and results and post them here, so that others can see your results and can easily verify that your testing process has integrity and that your results are reproducible. I know you truly, honestly believe I am completely off my rocker with every claim I've made, and that the only reason the computer industry still does these things is that we are mired in tradition and habit. But until you can illustrate otherwise, there's no point in replying to every patient reply and explanation I make to say, once again, that you "think" I'm wrong. I'd love for you to categorically prove me wrong and, in doing so, let me get hold of the performance gains you say can be had by placing our most-used partitions on loopback devices located on extended partitions that themselves sit on logical partitions late in the physical drive's geometry.

     Besides - if your benchmarking bears your claims out, then I won't have to rework the awkward partitioning strategy that Sharp has forced upon me once we know how to get rid of the dependence on the first two vestigial partitions. If it turns out that way, I'll already HAVE the optimal performance setup and can relax, knowing I'm as optimized as I can be. But you won't. About the best result you can hope for is to demonstrate that while your way DOES cause a performance hit, it is minimal enough that most people aren't heavily affected. But as for me - every little bit of performance I can squeeze out of my Zaurus is worth the effort. Especially when it's just a matter of partitioning.

     Throughout this thread I have very carefully replied, post after post, patiently and as clearly as possible, explaining how these issues directly affect real-world performance on our Zaurii, and I have gone into enough detail that anyone could follow the history, development, implementation, and Zaurus-specific software/hardware issues at the heart of the problem. My explanations are sound and well presented, and they reflect my own experience as a robotic engineering technician in a manufacturing facility that did (and still does) fabricate proprietary commercial computing devices centered around ATA IDE HDDs, as well as the experience of the engineers who designed those devices. We worked directly from recommendations presented to us by engineers from IBM, Seagate, and Quantum who actually designed the drives themselves. The claims I've made are also validated by EVERY associated manufacturer of the last decade.

     Despite all this, you still believe that I'm absolutely - almost categorically - wrong in regard to each and every one of them. Believe it or not, I actually don't have a problem with this at all; you are of course entitled to your opinion. What I DO have a problem with is that members with less experience will inevitably stumble across this post, and in their desire to better the performance of their Zs they may be completely misled by someone who simply disagrees based on untested theories, offers a vigorous argument to the contrary, and single-handedly stands in opposition to every manufacturer, engineer, and benchmark that has been established concerning the IDE ATA-33 device interface since its inception, as well as everything currently known in the community about how the device is integrated with Zaurii. This runs completely counter to the reason these forums exist - we are here to help other users by sharing proven facts and realistic solutions to real-world problems. If it were not for this, I would have quit posting in this thread quite a while ago; but it's the junior members who'll be hurt by misleading claims, not the senior, experienced ones, and I remember all too well how often I was tripped up by similar sincere but incorrect claims during that steep learning curve we all went through as new Zaurus owners!

     So please, before you post time and time again disagreeing on subject matter that is well established in the community, take the time and make the effort to do the benchmarking; find engineers, technicians, manufacturers, personal life experience, or SOME form of factual evidence to support such a sweeping claim that everyone in the industry is wrong because they are myopic and mired down in tradition and historical "habit". It's one thing to make a claim against a theory or idea someone is presenting that is as yet unproven. It's quite another to single-handedly defy every bit of conventional wisdom in a well-documented and well-researched field, and then back it up by saying you know for sure they are wrong, supported only by a theory you've postulated but never tested, even for yourself.

     You may be right. You very well may be the next "David" who slays the tradition-bound "Goliath" with his theoretical "sling". Just remember - David couldn't have done it without a stone. Get your Zaurus, get your Hitachi Microdrive, make your scripts, do your benchmarking, and post your process, controls, and results so they can be checked for integrity and found reproducible. Alternately, find a manufacturer, an IDE design engineer, or SOMEONE from a credible background who will verify your claims in each of these areas, and have them present evidence that the industry as a whole has been sincerely misguided in its conclusions. But until you can and do, we'll just have to go with the hard-earned knowledge we currently have.

     To YOU, the reader of this thread and post - as I urged in my first posting, if you wish to learn more concerning these issues, you can find much of it right here on the forums just by doing simple word searches. Better yet, choose one of the several techniques, also well documented on the forums here, to test drive and processor/system performance in the areas related to the issues in this thread. Share your process with others here on the boards so they can also help check that your process was clean, and then let the results speak for themselves.

     I've exhaustively, and as clearly and patiently as possible, explained, defined, and clarified every aspect and angle of the performance issues I originally presented. If I have not been clear enough, or if my explanations lacked logic or reason, then I present my sincerest apologies to the community here. I have honestly given the best effort possible to illuminate the facts and present factual, reasonable evidence on this subject. Regardless, I offer it freely to all, with the hope that some may benefit from it. I was quite excited at the beginning of this thread when the opportunity presented itself for me to share pertinent information that would actually help others in a real and tangible way. I am very weak in so many other areas (programming, cross-compiling, etc.) that it felt great to be able to offer something back to the community that I take so much from.

     That feeling has been quite extinguished at this point. I find myself completely demoralized by Cybersphinx's replies, as they seem more aimed at denouncing the legitimacy of my claims than at finding evidence to the contrary and facts to back it up. If I cannot present facts in such a manner as to convince one person of evident truths concerning such a well-founded topic as this, then I obviously should bow out of this issue entirely and let the facts either speak for themselves or let someone else speak for them. This will be my last post in this thread. I didn't expect this thread to devolve into an "I'm right"/"you're wrong" fest. I'm not into combative posting, so I'm calling it quits before it boils all the way down to a flame war.

     I think I'm gonna drop back to being a "quietly watch from the sidelines" member and quit publicly posting altogether. It's just not worth an argument over the most fundamental facts imaginable.
HOWEVER: Thanks to everyone who participated in a positive way in this thread and gave meaningful information that furthered it. As it managed to pull a LOT of good information forward, I still cannot make myself regret having started it. Having said that, though, there is ZERO chance of any further posting from me on these forums. I lack any desire whatsoever to spend as much time as I do on my posts, trying to make sure they are clear, well researched, and comprehensible, just to have it all drowned in pointless confrontation. I realize there are many members who are more knowledgeable and much more brief, yet also more concise, in their posts, but I do the best I can. I'm finally starting to understand the frustration and the reluctance to post at length that the developers and more knowledgeable members feel. All too often their posts immediately become riddled with replies from people who are disagreeing just for the sake of disagreement and producing theoretical but completely impractical evidence as test cases to prove their stance. Any good that can possibly come from my posts is far outweighed by the diatribe that must be endured as a result.

I come away from this thread feeling like I just told my best friend that I believe the world to be round rather than flat. I know my theory of the world being round sounds totally crazy, but if one more person tells me I'm incorrect and that it truly is flat, then I'm going to run and jump right off the edge of it just to get away from them!

Whoever wants the last word can have it. You've already had mine.



Be Well My Friends,

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: adf on December 25, 2005, 01:17:05 pm
Merry Christmas to you too.

I just can't refrain from pointing out that partitioning is a non-issue in pdaXrom. At the moment my 3100 (incompletely set up, since the feeds are down and I need hostap ipks) has its internal microdrive as one ext3 partition mounted at ide (so long as I don't boot with a card/drive in the CF slot) and one 128 MB swap partition. Nothing else. Works great.

If you are really that concerned about the partition issues on the 3100, there is a simple solution.
Title: Nevermind. It's Been Fixed.
Post by: Meanie on December 26, 2005, 12:09:54 am
The C3100 really does not need /hdd1 and /hdd2. Only rc.rofilsys wants to see them, because Sharp just copied it from the C3000, which really did need them. So all you need to do is hack rc.rofilsys and remove the annoying dependency. The tiny files in /hdd1 and /hdd2 are used only for recovery, i.e. when you do a factory reset and it wipes your /home and /hdd3, it uses those files to regenerate the directory structure and put sample files back. But you really don't need those samples, since they are Japanese templates, etc. - nothing you would really miss.
Since I hate this factory reset feature anyway, I hacked it not to wipe /hdd3 and not to use those files.
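For the curious, a hedged sketch of what such a hack might look like. The path /root/etc/rc.d/rc.rofilsys and the exact line contents are assumptions that vary by ROM, so inspect the script first and adapt the patterns before touching anything:

```shell
# Hypothetical sketch: comment out every line in rc.rofilsys that
# references /hdd1 or /hdd2. Keep a backup so a bad edit can't break boot.
cd /root/etc/rc.d
cp rc.rofilsys rc.rofilsys.orig                      # backup first
sed -e '/\/hdd1/s/^/#/' -e '/\/hdd2/s/^/#/' \
    rc.rofilsys.orig > rc.rofilsys                   # prefix matches with '#'
```

This is a blunt pattern match; review the result (e.g. with diff rc.rofilsys.orig rc.rofilsys) before rebooting, since rc.rofilsys also handles /hdd3.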
Title: Nevermind. It's Been Fixed.
Post by: loc4me on December 26, 2005, 11:53:09 pm
Stop renaming the topic, editing posts that were written several days ago, and then later removing them. It is really annoying. Don't say things in the first place if you find you want to remove your post afterwards. The conversations between Neuroshock and Cybersphinx are helpful in revealing misconceptions. Please don't remove useful information.
Title: Nevermind. It's Been Fixed.
Post by: neuroshock on December 27, 2005, 08:18:05 am
Quote
Stop renaming the topic, editing posts that were written several days ago, and then later removing them. It is really annoying. Don't say things in the first place if you find you want to remove your post afterwards. The conversations between Neuroshock and Cybersphinx are helpful in revealing misconceptions. Please don't remove useful information.


Thank you. I was wondering about that myself. It's irritating; I had to go back through my email just to find this post now. I thought the author of the initial post was the only person (other than the moderator) who could change thread names. At this point I'm assuming either the mod has chosen to do this for some reason or someone knows something that I don't. Either way, I plan on copying and pasting EVERY post (yup, even the ones I find irritating), posting them on my website, and creating a link to it in a new thread, in case anyone needs or wants to refer to it in the future.

Imho, no one's posts should be edited by anyone other than the moderator or the author him/herself, and for no reason other than clarity or accuracy, with exceptions only for the extreme racial/cultural/personal slander we've unfortunately seen from time to time. While I disagree with him, Cybersphinx and I have both brought up valid points, and the whole point of a forum is to be able to share and find information, facts, and opinions on topics; altering them without notifying the reader in the post, or making the post difficult to find, is a disservice to the community.

-NeuroShock
Title: Nevermind. It's Been Fixed.
Post by: speculatrix on December 27, 2005, 03:52:42 pm
Quote
Imho, no-one's posts should be edited by anyone other than the Moderator or him/her self (the author) for any other reason than clarity or accuracy issues with exceptions only given to extreme racial/cultural/personal slander as we've seen

I think it's fine to move a thread between topics.

And people SHOULD change the topic in a for-sale thread when the item's been sold.

Otherwise, I agree
Title: Nevermind. It's Been Fixed.
Post by: PaulBx1 on March 31, 2006, 01:37:56 pm
I found this interesting thread when poking around. On the controversy between neuroshock and cybersphinx, I have to say I come down mostly in the latter's camp.

I have spent a lot of time working on things like disk I/O. The last thing I did before retiring was work as a systems engineer for Sequent (which IBM swallowed), testing and troubleshooting Fibre Channel storage area networks. Sequent was always at the top of the charts for benchmarking on really big systems, so we had to spend plenty of time worrying about performance. We did a lot of benchmarks and worked constantly with disk manufacturers. I spent many an hour sitting in front of a Fibre Channel analyzer, looking at I/Os and figuring out why they were not working, or were working too slowly. Fibre Channel of course implements a SCSI-based protocol for disk storage, so what I say may not apply completely to IDE, but a lot of it will.

The line neuroshock was taking was actually more true in the old days, when systems and storage were simple, when people could understand the whole picture, and when they had the tools to lay things out so that performance could be enhanced (and they needed to, because those systems were slowwww). But those days are long gone. I/O is extremely complex now; the systems, drives, and I/O controllers all do so much optimization and reworking of the command stream that there is very little control over what can be done or how it can be optimized.

Just to give an example, take this notion of putting stuff on the inside or outside of a drive platter having an effect on performance. First, it is laughable because of all the remapping that goes on in the disk controller. You never really know where stuff physically ends up on the platter any more; only the firmware writer knows that. And even if there were no remapping, you still couldn't know for sure! I remember, way back, testing a system with no remapping where the performance was better with data coming off the slow part of the disk - or maybe it was with a slower disk compared to a faster one, I can't remember exactly. Here's how that happened:

Those old drives had simple, smallish memory buffers interposed between the platter and the cable. When the DMA speed on the cable is closely matched to the data speed coming off the platter, the transfer proceeds at the top rate. But what happens when the data comes off the platter faster? The transfer goes slower! It does this because the data fills up the buffer and then has to stop entirely, losing a rotation before starting again. So your transfer is full of rotations lost by the platter, which ends up being slower overall than if the two speeds were matched. If I could have slowed that platter's rotation speed down, I could have sped up the transfer rate!

Now, who knows if this holds any more (probably not - buffers might be big enough to swallow a whole file in one gulp, and many read requests never actually go to the platter because the file is still in the buffer). But the point here is that there are SO MANY FACTORS affecting I/O performance that it is virtually impossible (outside of a few very simple things, like preferring a 10x SD card to a 4x one) to predict with any assurance that the machinations you are going through will do any good.

So just use your hardware and don't get yourself too torqued about extracting the last bit of performance out of it. The firmware and OS writers have taken care of that. "Don't worry, be happy!"