Author Topic: Nevermind. It's Been Fixed.  (Read 12713 times)

neuroshock

  • Sr. Member
  • ****
  • Posts: 269
Nevermind. It's Been Fixed.
« Reply #15 on: December 18, 2005, 01:15:25 am »
Quote
A). Thank you very much!!

B). Warranty....hmmmm maybe I should keep it in one piece while under warranty.. or Tom61 gets his mods goin....

C). If I decide to delay dissection, maybe I should use a USB WiFi dongle? I ordered a D-Link DWL-122- any guesses on getting it to work and putting the 6-gig in the CF slot (for a 10-gig Z)?

D). Then again... it SOUNDS easy enough... (I'm one of those people who tends to have a "mystery" part left over after disassembling and reassembling something)



On the partition issues... If you pulled the drive from a 3100, wouldn't it just be a 1000? That is to say, won't it run from flash? (say with something far from Sharp, like pdaXrom or GPE?) If so, wouldn't the idea be to set a basic OS up in flash and just partition the thing?

Great question, Adf.

If I'm understanding things correctly, yes....and no. If you search you will find a few C1000 users that have attempted to reflash their models to a C3100, with a mixed bag of success. Their basic aim being, of course, to get any benefits of the newer ROM/applications. It seems to be the general consensus that the end result isn't worth the effort. As far as I know nobody has tried to flash a C3100 back to a C1000, but more than a few people think it might be feasible. There are apparently a few minor issues that make the C3100 dependent upon the microdrive. Just for curiosity's sake I took a few minutes out to try booting my C3100 in both the standard Sharp ROM and Cacko without any microdrive whatsoever- no luck. As soon as it detects it's not there it freezes the boot sequence, and I can't get it to continue with Ctrl-C or any other method I know of. However if it were flashed with a C1000's ROM, I dunno. It might?
The only reason I could ever imagine trying that would be if I either ruined my microdrive and wanted to salvage the unit from one perspective or another, or if I decided to populate the interior CF slot with a Bluetooth or WiFi card. Hopefully the first won't happen. Ever.

Oh, sorry- I forgot about your PM about the dongle till just now. I'm using dongles for everything except WiFi and modem, so I don't know what WiFi dongles will work yet; the dongle I discussed in the PM I sent you was my Bluetooth one. Incidentally I DO have CF Bluetooth cards to try as well, and would like to get a WiFi dongle, so if your D-Link works well let us/me know! That way I can use whatever peripherals I end up with either in CF or USB interchangeably, depending on what I finally decide works best.

-For Bluetooth dongles I'm using a cheap-o IOGear off of eBay and a Billionton w/antenna; I also have a Belkin dongle that I haven't tried yet, but the first two work well.
-For 10/100 Ethernet I have the light blue one that is the same as Meanie has (look on his website for more info).
-My Pretec CF modem works well.
-The WiFi card I'm using is a D-Link DCF-660W. I tried my Pretec card but have had no luck with it yet. I think I saw directions for how to get it working with Cacko, however- just haven't gotten far enough to get that done yet.

Umm, I think that's all I've tried to date. All those that I mentioned as working are working on both Cacko 1.23 and the Sharp ROM. Looks like I'll be staying with Cacko on my C3100, so I'll be donating to them soon. =) It's nice. REALLY nice.

Hope this helps Adf,

-NeuroShock
SL-6000L & C3100.

neuroshock

  • Sr. Member
  • ****
  • Posts: 269
Nevermind. It's Been Fixed.
« Reply #16 on: December 18, 2005, 09:08:39 am »
Okay folks,

The hard part is done. I borrowed bits and pieces from everywhere and had many aborted attempts that didn't work out. However, I have found a setup that I am content with for now. I must confess it doesn't actually address the issue of freeing up the first two primary partitions, which are wasted partitions and wasted space on a C3100. Further, I avoided the scripting problems of the first three drives by simply using a partitioning scheme that worked without altering them. Eventually this needs to be permanently addressed, but my personal skillz are not up to the task.

Bam and Frobnoid were not only the primary responders but eventually also the sources of most of the solution. A special thank you to both of you; I've decided to divide the money between the two of you equally. Please either PM me or leave me a message in this thread as to what PayPal address you wish me to send the money to. Frobnoid, if you're insistent on not accepting the money, please just choose a Zaurus developer that you think would deserve and benefit from the gift instead (such as Cacko, pdaXrom, Guylhem etc., just to name a few). I'm sure they will be more than happy to see a few extra dollars roll in to keep the projects going.

Here's what I ended up with:

I replaced the interior 4GB Hitachi Microdrive with a new retail-version 6GB Hitachi Microdrive, model # HMS360606D5CF00.

My final fdisk partitioning scheme is:

Device       Start   End   Blocks     Id   System        Notes
/dev/hda1    1       2     16033+     83   Linux         Primary, 16MB, ext3
/dev/hda2    3       4     16065      83   Linux         Primary, 16MB, ext3
/dev/hda3    5       521   4152802+   b    Win95 FAT32   Primary, 4.05GB, FAT32
/dev/hda4    522     746   1807312+   5    Extended      Container for the logical drives
/dev/hda5    522     680   1277136    83   Linux         Logical, 1.25GB, ext2
/dev/hda6    681     746   530113+    82   Linux swap    Logical, 512MB, swap

This exact setup neatly bypasses the auto-mounting issues with the first three drives. I have also seen no problems with the ext2 partition that I use for installing large programs and subsets such as X/Qt etc. The swap partition does indeed work sitting inside the extended partition and has exhibited no apparent problems either.

Incidentally, while testing and developing this strategy I found that I could use either fdisk or Partition Magic 8.0. I actually wish I had thought about it sooner. I simply plugged the 6GB Microdrive into a CF-to-40pin-IDE adapter I bought off of eBay about a year ago (normally used to put CF flash memory in embedded project devices), and plugged the adapter into a 40pin ribbon cable on one of our desktops. Then I set the CMOS to boot from CD and voila! Resizing/sizing the partitions then became a piece of cake, and Partition Magic even handled the formatting of the partitions for me. I did also try the 4GB drive with the intent of just copying the partitions from one Microdrive directly to the other, but had no success because the reported geometry of the 4GB drive does not match its actual geometry, and therefore Partition Magic won't let you alter anything on it or read any data from it.
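For anyone doing it with fdisk from a Linux boot CD instead of Partition Magic, here's a rough sketch of the session I'd expect to reproduce the table above (assuming the Microdrive shows up as /dev/hda on the adapter- check dmesg first, and don't take my keystrokes as gospel; tool names vary a bit by distro):

  fdisk /dev/hda                # then, at the fdisk prompt, roughly:
  # n p 1, cylinders 1-2        (new primary partition 1)
  # n p 2, cylinders 3-4        (new primary partition 2)
  # n p 3, cylinders 5-521      (new primary partition 3)
  # n e 4, cylinders 522-746    (extended partition over the rest)
  # n l,   cylinders 522-680    (logical partition -> hda5)
  # n l,   cylinders 681-746    (logical partition -> hda6)
  # t 3 b                       (set type of partition 3 to Win95 FAT32)
  # t 6 82                      (set type of partition 6 to Linux swap)
  # w                           (write the table and quit)
  mkfs.ext3 /dev/hda1
  mkfs.ext3 /dev/hda2
  mkdosfs -F 32 /dev/hda3       # or let Partition Magic format the FAT32
  mke2fs /dev/hda5
  mkswap /dev/hda6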

I likewise found several alternatives for moving the existing data from the original partitions to the new ones. I used dd and cp with equal success, and later tried installing the Cxx00 version of TheKompany.com's tkc-explorer. I didn't think until later that, since Cacko already has Midnight Commander installed by default, I probably could have used it as well. The bottom line here is that there is no secret to getting the data moved; just make sure you get it all moved and that you preserve any directories and symbolic links if there were any programs installed. In my case I had no programs installed on the embedded hard drive, as I'm starting at ground zero with a clean install of Cacko.
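For illustration, one way the plain-cp route might look for the FAT32 partition (the device names and mount points here are just examples- the old drive's device depends on which slot it lands in, and your cp needs to support -a; otherwise use the tar pipe speculatrix posts below):

  mount /dev/hdc3 /mnt/old    # old 4GB drive in the CF slot (check df -k / dmesg for the real device)
  mount /dev/hda3 /mnt/new    # new 6GB internal drive
  cp -a /mnt/old/. /mnt/new/  # -a preserves permissions, timestamps and symlinks
  diff -r /mnt/old /mnt/new   # sanity check- should print nothing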

The above procedure will get you a clean boot without any reported errors and with the first three drives mounting fine and operating normally (they're just bigger now). I would suggest flashing/reflashing with Cacko (again, if you've already installed it) if you're planning to use Cacko after you install the larger Microdrive, just to make sure that Cacko can freely make any changes/symlinks/etc. to it that it desires. It would truly suck to do all this work and then later find something was broken after a few hundred hours of loading programs and juggling settings etc.

Things yet to be done:

I still need to edit the scripts a bit to get the last two drives auto-mounted, but now I can use a direct copy of the scripts posted by Bam on his site to accomplish that. (This was one of the reasons I put the swap partition last instead of directly after the FAT32 partition. If the swap partition were closer to the center of the disk my performance would be a bit better when swap was being heavily used, but this whole setup was a compromise, and this one seemed a prudent trade to keep me from having to make blind edits to the script.)

For now I'm manually mounting the extended partitions and manually turning the swap partition on and off via swapon/swapoff. Between editing the script to match Bam's and a quick entry or two in another file, I should have the swap partition automounting as well.
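To spell out the manual steps, plus the sort of /etc/fstab entries that ought to automate them (hedged- I haven't confirmed which file the Sharp/Cacko init scripts actually honor, which is why I'm matching Bam's scripts rather than guessing; the mount point is just an example):

  mount -t ext2 /dev/hda5 /mnt/ext2   # manual mount of the logical ext2 partition
  swapon /dev/hda6                    # enable swap
  swapoff /dev/hda6                   # disable it again

  # candidate /etc/fstab lines:
  /dev/hda5   /mnt/ext2   ext2   defaults   0 2
  /dev/hda6   none        swap   sw         0 0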

While this solution was neither elegant nor what I truly wanted, it is currently the only way I have found to meet my basic criteria. But I'll be satisfied with this, as it indeed preserved my ability to use my C3100 as a Windows USB storage device, gave me the ability to install programs to a native ext2/3 partition without the performance hit of a loop device, and provided the extra "memory" of the swap partition while offering better performance than I could have achieved with a swap file. No major performance penalties. Good enough.

For the future:

I'd really like to see someone directly address the lame vestigial primary partitions. This would free up a bit of space and allow for more advanced and flexible partitioning schemes in the future. A bit of performance would also be gained if extended partitions were not our forced solution for the two most heavily used partitions on the actual drive (the ext2 partition that programs will run off of, and the swap partition). The same can be said about reliability as about performance. It's not optimal as it is- not in the least.

I admit this is a "crutch-fix" meant to limp me, and others like me that may go down this path, along until someone can handle it more thoroughly and permanently later on. It is simply the best that I could do at my skill and knowledge level, and it took a bit of growth and "reaching" for me to even come this close. I'm not the brightest bulb in the package. =\ But I'm stubborn.

I hope this helps others who may suffer from a similar lack of skillz but who still wish to upgrade their drives in the meantime. I'll try to post back to this thread any changes to scripts or files that need to be made to ultimately finish the job off, so that everything is automated.

Thanks All!,

-NeuroShock
« Last Edit: December 18, 2005, 09:18:01 am by neuroshock »
SL-6000L & C3100.

bam

  • Hero Member
  • *****
  • Posts: 1213
    • http://thegrinder.ws
Nevermind. It's Been Fixed.
« Reply #17 on: December 18, 2005, 10:36:18 am »
Quote
Quote
But on a 6GB drive you can only have a total of four primary partitions, or three primary plus one extended (as Bam already pointed out). This is preclusive, as both the swap partition and whatever you choose for partition 3 will also demand to be Primary. You would then have four primary partitions, which prohibits any logical ones- and you would have to have at least one more to meet my requirements.

I'm not aware of any requirement that swap be a primary partition. With that said, I don't know that I've ever tried swap on an extended partition. If you've seen it stated as a requirement, can you point me at such a reference? (I'm always interested in learning something new)

Quote
Also this just addresses the partitioning. I also need to know how to properly copy the data from the old 4gb partitions to the new 6gb ones.
I've been told to use the dd command. I'm totally disknowledgeable in its use. Can you enlighten me as to its use? Or offer a better alternative?

The following should be sufficient (as would bam's suggestions):
With the 4GB drive in place:
dd if=/dev/hda1 of=~/hda1.dd # read all data off partition 1, store that data in "hda1.dd" in your homedir
dd if=/dev/hda2 of=~/hda2.dd # read all data off partition 2, store that data in "hda2.dd" in your homedir

With the 6GB drive in place, properly partitioned:
dd if=~/hda1.dd of=/dev/hda1 # read from ~/hda1.dd and output it to the new partition 1
dd if=~/hda2.dd of=/dev/hda2 # read from ~/hda2.dd and output it to the new partition 2

dd reads/writes the raw bytes from the filesystem ("if" is "input file", "of" is "output file").

If you put both disks in the Z at once, you should be able to do the dd directly without storing to the internal flash by doing:
dd if=/dev/ORIGINAL1 of=/dev/NEW1 # copy old partition 1 straight onto new partition 1
dd if=/dev/ORIGINAL2 of=/dev/NEW2 # copy old partition 2 straight onto new partition 2

where ORIGINAL is the location of your 4GB CF and NEW is the location of your 6GB CF. "hda" is the one internal to your unit. I don't know whether the other will be "hdb" or "hdc". (If you have no SD card in, run "df -k" from the prompt, and you'll see which of the two is in use...)


cool, I think I like the dd method better.... (thought it would be more complex, but not really after seeing it- not much experience with dd, except swap creation)
SL-C3100 current: Stock/Tetsu 18h
Socket BT CF Card
Linksys WCF-12 802.11b/Cheapie USB Ethernet

The Grinder

Cresho

  • Hero Member
  • *****
  • Posts: 1609
    • http://home.earthlink.net/~cresho/
Nevermind. It's Been Fixed.
« Reply #18 on: December 18, 2005, 11:26:56 am »
I was about to suggest Partition Magic 8 and also the CF slot drive..... been doing it for years on my Z
Zaurus C-3200 (internal 8gb seagate drive) with buuf icon theme, cacko 1.23 full,  and also Meanie's pdaxqtrom-Debian/Open Office
Zaurus SL-5500 Sharp Rom 3.13 with steel theme
pretec pocket pc wi fi
ambicom bt2000-cf bluetooth-made in taiwan
simpletech 1gb cf
pny 1gb sd
patriot 2gb
ocz or patriot 4gb sd(failed after 2 weeks)only on z
creative csw-5300 speakers in stereo
DigiLife DDV-1000 for video, Audio, Picture recording playable on the zaurus
Mustek DV4500-video recorder, pictures, voice record on sd for z

zaurusthemes.biz | ZaurusVideo | Zaurus Software

raybert

  • Full Member
  • ***
  • Posts: 233
Nevermind. It's Been Fixed.
« Reply #19 on: December 18, 2005, 03:58:54 pm »
Quote
... gave me the ability to install programs to a native ext2/3 partition without the performance hit of a loop device being implemented....

... A bit of performance would also be gained if Extended partitions were not our forced solution for the two most heavily used partitions on the actual drive, (the ext2 drive that programs will run off of and the Swap Drive.). The same can be said about reliability that is said about performance.  It’s not optimal as it is – not in the least. ...
I question the theoretical "performance hit" of these two scenarios, as well as whether their reliability is suspect.

As for extended partitions, I would think that any extra work to access these would be done at mount time.  Once they are mounted and the drivers know the addresses of the partitions I would expect there to be zero performance impact.  Why do you think differently?

As for loopback, the magic that takes place there is done in software and in RAM: I would expect that there are no additional device interactions that take place.  I'd bet that any performance impact would be hard to measure, much less sense on a human level.  Again, why do you think differently?

I think the same could be said about swap files vs. swap partitions.  (In fact, I would not be surprised to find that loopback is used to implement a swap file.)  I doubt you'd really experience any difference in performance.

And lastly, I see no reason for there to be any kind of reliability hit with either of these.  If they work, they work.  What would make them any less reliable than other solutions?

The one thing about your set-up that would bother me is the two "vestigial" partitions. They do no harm except wasting some space, but it's somewhat ugly to have to keep them. I would expect though that this can be fixed easily if it truly is only scripts that control initialization. OTOH, if Sharp stuck something boneheaded in their proprietary code, you'll probably be living with this for a while.

Anyway, glad to see you got your system working.  Good luck with it.

~ray
« Last Edit: December 18, 2005, 04:00:51 pm by raybert »

adf

  • Hero Member
  • *****
  • Posts: 2807
Nevermind. It's Been Fixed.
« Reply #20 on: December 18, 2005, 04:06:44 pm »
Quote
OTOH, if Sharp stuck something boneheaded in their proprietary code, you'll probably be living with this for a while.

This is why I suggested pdaxrom or OZ for the experiment. I bet they did just that.
**3100 Zubuntu Jaunty,(working on Cacko dualboot), 16G A-Data internal CF, 4G SD, Ambicom WL-1100C Cf, linksys usb ethernet,  BelkinF8T020 BT card, Belkin F8U1500-E Ir kbd, mini targus usb mouse, rechargeble AC/DC powered USB hub, psp cables and battery extenders.

**6000l  Tetsuized Sharprom, installed on internal flash only 1G sd, 2G cf

polito

  • Jr. Member
  • **
  • Posts: 77
    • http://thether.com
Nevermind. It's Been Fixed.
« Reply #21 on: December 18, 2005, 05:24:44 pm »
Actually, the two 9MB partitions aren't useless. If you've ever done an 'ls -la' on them you'll find that they've got a .sys folder in them which has some tarballs.

From what I recall they're used when the system boots into the backup/restore system and launches some sort of rudimentary operating environment which only handles the backup and restore so it can get a reliable backup/restore without files being in use, etc.

Please note that what I say here is pieced together from my rather interesting memory and from having scanned a few posts about them somewhere that I can't remember. But the main thing is that the partitions do have a use, and I can't remember whether the special areas get the .sys folder and other tarballs recreated in them or not.

Just figured I'd throw my 50 cents in. I do agree that it's rather lame to have goofy baby partitions... perhaps it's something like the PC BIOS limitation on accessing files above 1024 cylinders in order to boot? I remember having to create small 16MB /boot partitions to hold nothing but the kernel and some other system files so that LILO could boot Linux. I don't know if there's something similar with ARM or not. Maybe Sharp just figured two little required partitions would be the only way they could guarantee that certain system things would just be there and they wouldn't need to worry about it *SHRUGS*

neuroshock

  • Sr. Member
  • ****
  • Posts: 269
Nevermind. It's Been Fixed.
« Reply #22 on: December 19, 2005, 01:26:32 am »
Quote
Quote
... gave me the ability to install programs to a native ext2/3 partition without the performance hit of a loop device being implemented....

... A bit of performance would also be gained if Extended partitions were not our forced solution for the two most heavily used partitions on the actual drive, (the ext2 drive that programs will run off of and the Swap Drive.). The same can be said about reliability that is said about performance.  It’s not optimal as it is – not in the least. ...
I question the theoretical "performance hit" of these two scenarios, as well as whether their reliability is suspect.


Great questions. (I'd expect nothing less, mind you!) Some have fairly easy answers with real-world arguments--- and some are founded more in my pet peeves than in huge performance hits. Honesty does us all good. =)

Please bear in mind up front that this partitioning scheme is helpful to me because I will be using extremely demanding X software via X/Qt that demands more system resources than the C3100 can normally give. This heavy usage greatly amplifies the performance hits I would take from these issues compared to a casual user or someone who only uses Qtopia-based programs written specifically for the Zaurus.

So here we go-

As for this first question, this is simple. The main performance hit comes from drive geometry issues and hardware performance from a hard drive's perspective more than from the CPU or software; however, the first portion I'll discuss is the CPU and software end of things. Data being delivered to drive partitions is routed by priority. (Much like IRQs establish priority for devices receiving the CPU's attention and therefore bandwidth.) So Primary partitions receive primary routing. Extended partitions must take a back seat to Primary partitions when routing conflicts occur. And they occur a LOT in IDE implementations. Further, Extended partitions are just that- partitions that are extended FROM a primary partition. Actually it would be more accurate to say they are extended THROUGH the Primary partition. For Extended partitions, not only do all routing calls have to be delivered through the Primary partition, but by definition extended partitions sit on a Logical partition also. As you were inferring about swap drives possibly being loopback devices (and we'll address that shortly), the extended partitions are somewhat of a similar loopback system that sits upon a Logical drive ("Logical" in this case meaning it doesn't really have a physical address on this side of the interface; remember the Primary partition is providing the actual calls for Logical Partition access, and then the interface tells the head/servo where to go).

So to recap - any data that comes/goes to an extended partition must first wait for any Primary partitions to clear the route. Then, just in order to get/put the data in the right place, the processor on the HDD controller has to calculate from the actual physical geometry what the "advertised geometry" would need to be for the extended partition, and then repeat this process for each data pack. It becomes very processor/controller intensive very quickly. It's why Primary partitions are almost always preferred for OSes to boot from. Ditto for swap partitions. It's why IDE bogs down so badly compared to SCSI and later interfaces, ESPECIALLY when you also have a Primary and Secondary hard drive on the same channel. This is because the Primary drive (Master) provides controller services for BOTH the Master and the Slave disk. This is why it's so much faster to copy from a Master drive on one channel to a Master drive on the second channel rather than from a Master to a Slave. They can't transfer data via the Master and Slave simultaneously, as the Master controller provides all translation services and can only handle one at a time. This is the same issue as our Primary/Extended drive issue, just on a whole other level. A last quick note in response to a question concerning this: the Master/Slave issue can be resolved by Cable Select negotiations on a modern or "current" IDE interface - if everything works together. But for our purposes Microdrives still only adhere to "yesteryear" performance specifications of ATA-33 and prior implementations that almost always HAD to have Master/Slave configurations.

We can sum all of the above up to be "translation overhead" that is dramatically increased when the most used partitions are also on extended/Logical partitions. You are just introducing two more translation levels as well as the lower priority issues as compared to avoiding all of it by putting that data on Primary partitions in the first place. This "fault" if you will can be rooted in the OS's drivers as well as the initial hardware interface translation on the controller itself.

That's the hard part. The easy part of the answer to this question is much simpler for most people to understand. In most drive geometries, Primary partitions almost always get the "favored location" for data. The two "favored locations" are the first track and the middle tracks. This is because the physical head of the drive is most often over those two tracks - much more so than anywhere else on a drive. This is why just about every operating system in existence that uses physical hard drives as its operating medium will by nature put its most-accessed files on the first or middle tracks IF the user partitions the entire physical drive as one large partition. Things get muddled really fast when multiple partitions are used, as the OS has no real way of knowing where the new physical first and middle tracks are located.

But one thing is ALWAYS true. Extended/Logical partitions are NEVER located on the first track, and in REAL WORLD APPLICATION are usually located PAST the middle track as well, simply by the fact that they are almost always placed AFTER the primary partitions physically on the drive.
Quick second-half recap - data that is placed AFTER the middle physical track takes longer to get to, simply because the servo arm/head has to travel farther out of its normal range to get to the track. Period.

Add the two together and you end up with a worst-case scenario for an HDD with platter geometry. Not only does it take the CPU, the software drivers, and the CPU on the HDD controller longer to translate HOW to get to the data- but once they do, it takes the physical servo arm LONGER to travel to the spot it needs to read it from. You'll immediately notice that one of these problems is contained within the OS and its drivers and the other is completely within the IDE HDD controller itself.

Does it contribute to real-world performance hits in physical hard drives? You betcha. These are known basic issues that have been around for as long as hard drive technology itself. Most end users and even programmers don't know the details of WHY certain partitioning schemes give better performance, but it's been ground into the community for ages to simply do things like put your OS and swap in the earliest Primary partition available. (This is also where the old "but still true" mantra of "put your swap partition on the earliest Primary partition of your least used physical drive" for best server performance comes from.) But feel free to run your own performance tests if you doubt the rationale here; it never hurts not to take someone else's word for granted!
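If you want numbers of your own, here's a crude raw-read comparison sketch (the block size/count are just examples, and use bs=1048576 if your dd doesn't grok 1M):

  time dd if=/dev/hda3 of=/dev/null bs=1M count=32   # primary FAT32 partition
  time dd if=/dev/hda5 of=/dev/null bs=1M count=32   # logical ext2 partition
  # run each a few times and ignore the first pass (cache warm-up)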


Quote
As for extended partitions, I would think that any extra work to access these would be done at mount time.  Once they are mounted and the drivers know the addresses of the partitions I would expect there to be zero performance impact.  Why do you think differently?


Oops, I already covered most of this above. Also, the software drivers only know the "advertised" or, in our case, "LBA" addresses of anything on the hard drive. The CPU on the HDD controller must then translate from LBA into the actual physical drive geometry. Hence the bottleneck and performance hit explained in long form above. The software drivers of ANY OS that uses a modern IDE HDD are completely blind to the actual drive geometry. Even the CMOS of your desktop computer is blind to it and only knows/uses the LBA geometry that is reported from the HDD itself. The CPU on the HDD controller then translates the value that the OS calls for into the real physical value on the HDD. The reason this is done this way is to overcome "would-be" geometry limitations like we used to have back with early IDE, RLL, and MFM drives. It's also the general basis of the problems and solutions of operating systems being able to recognize drives beyond a certain size/geometry. The actual drive geometry is COMPLETELY known only to the physical electronics of the HDD controller that is mounted on the drive and is never exposed to the Operating System. In this way the OS can use HDDs with capacities MUCH greater than the system builders or operating system engineers ever imagined possible when they released their products. The HDD controller (mounted on the drive itself for IDE drives) does all the work for this, and in doing so also becomes our performance bottleneck here.
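To make the translation concrete: my 6GB drive reports the usual translated geometry of 255 heads x 63 sectors/track (which is why the block counts in my fdisk table above come out to 16065 1K-blocks per two cylinders). The classic CHS-to-LBA formula, worked for the start of /dev/hda3 at fdisk cylinder 5 (i.e. C=4, H=0, S=1 in zero-based CHS terms):

  LBA = (C * heads + H) * sectors_per_track + (S - 1)
      = (4 * 255 + 0) * 63 + (1 - 1)
      = 64260   # exactly where hda2 (cylinders 3-4, 32130 sectors) ends

The drive's own controller then remaps that LBA to wherever the sector physically lives.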


Quote
As for loopback, the magic that takes place there is done in software and in RAM: I would expect that there are no additional device interactions that take place.  I'd bet that any performance impact would be hard to measure, much less sense on a human level.  Again, why do you think differently?


You're exactly right: "the magic that takes place there is done in software and in RAM". I couldn't have said it better myself. And because of this, both the software and the RAM required to make the loopback device translation require extra CPU cycles and CPU as well as memory bandwidth. By definition, anything that you add that requires additional software/RAM to handle will add processor overhead and incur a performance hit.

However, in certain circumstances you have a valid point- with pure flash memory, specifically SD cards used in Zaurii. Because of Sharp's ridiculous insistence on an MMC-compatibility-mode implementation of the SD card slot, the performance of any particular SD card may be severely limited by that bottleneck. For example, an SD card that is advertised as 10x speed may be a bit faster than a normal SD card in a Zaurus SD slot, but a 32x card will offer no more performance gain than a 10x card because of this enforced bandwidth limitation. Because of this bottleneck you can use an SD card formatted FAT and give yourself ext2/3 storage via a loopback device with hardly any performance hit at all. Testing by OESF members has put the entire performance hit at about 1% of the bandwidth being used in these SD transfers. So it can be a pretty smart move to use a loopback device with an SD card on a Zaurus.
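For anyone who hasn't set one up, a minimal sketch of the loopback trick (the image path and size are just examples):

  dd if=/dev/zero of=/mnt/card/ext2.img bs=1M count=256   # 256MB container file on the FAT card
  mke2fs -F /mnt/card/ext2.img                            # -F: force mke2fs to work on a plain file
  mkdir -p /mnt/ext2img
  mount -o loop /mnt/card/ext2.img /mnt/ext2img           # the kernel loop driver does the translation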

However, the CF slot does not have this natural bottleneck limiting performance. Because of this, the percentage hit when using a high-speed CF device with a loopback device floating on its partition CAN be very substantial. The faster the CF device, the worse the performance hit. The bandwidth/CPU overhead for using a loopback device on a FAT-formatted CF device can rise as high as 30-33%! Running something large completely off the CF partition, in ADDITION to a swap partition, can easily incur this sort of hit. (BTW this is regardless of Primary/Extended placement.) A good example of an application that would fit this scenario would be running X/Qt and a swap partition on the same CF drive using a loopback device. OUCH. Almost 100% of your data calls have to be routed through this loopback translation, and the SOFTWARE and RAM that provide this magic have to steal processor cycles from your CPU for each and every packet. And every time they do, there are fewer CPU cycles and less RAM available for running the actual program you're using.

You can find most of the information you would need to look into this further, or to verify any of the above, right here on the OESF forums. Just do a search for SD cards, loopback devices etc.- it's how I actually found out that the performance hit was so low for SD cards in the first place (much to my surprise at the time).


Quote
I think the same could be said about swap files vs. swap partitions.  (In fact, I would not be surprised to find that loopback is used to implement a swap file.)  I doubt you'd really experience any difference in performance.


I'll try to keep this one brief, simply because of how well known a performance issue it is. (Nobody pass out here- I know I'm not brief often.) The difference between using a swap file versus a swap partition is very real and very measurable. The more intensively the partition/file that resides within the loopback device is used, the greater the performance hit. You can search for good info on this very topic right here on these forums as well. In this case that very performance hit is amplified by the Extended partition/translation overhead issue etc. as well, which only makes it that much larger.

You do bring up an interesting point about the swap and loopback issues; I can clarify it for you a bit. The analogy is EXACTLY correct when applied to a swap file, as a swap file is simply swap formatting superimposed over a file on a regular drive partition- and this is accomplished, of course, using the magic of software and RAM. Sound familiar? Again, any layer of translation is always accomplished by an additional layer of software that uses additional RAM (ironically this also works your swap file/partition that much harder), stealing cycles from your Zaurus's CPU and available bandwidth all the while. A swap partition, on the other hand, is a partition that must be formatted either by the user after its creation or by the system during its first use. In that respect it's just like any other kind of partition, and unlike a loopback device, no translation layer is needed. So you had the right idea; you were just applying it too generally.
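Side by side it looks like this (device name per my layout above; the swap file path and size are just examples):

  # swap partition: format once, then enable- no filesystem in the way
  mkswap /dev/hda6
  swapon /dev/hda6

  # swap file: lives inside an existing filesystem, one translation layer deeper
  dd if=/dev/zero of=/mnt/ext2/swapfile bs=1M count=128
  mkswap /mnt/ext2/swapfile
  swapon /mnt/ext2/swapfile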


Quote
And lastly, I see no reason for there to be any kind of reliability hit with either of these.  If they work, they work.  What would make them any less reliable than other solutions?


The reliability issues here, quite frankly, are MUCH more difficult for me to explain away, because the truth is they are NOWHERE NEAR as great an issue as the performance issues are. You've got me cold on this one, I must admit.

The only shred of evidence that I'll proffer in this respect is that if the primary partition that the Extended partition is attached to, OR the Logical partition that the drives themselves float on, becomes corrupted, the Extended partitions are most likely laid to waste as well. This doesn't happen often, and even when it does, with modern IDE technology it's usually somewhat recoverable.

To sum it up I quickly tossed out the “reliability” card onto the table and equated it to the performance issues and did so without thinking it through and in doing so unjustly represented the facts. Thank you for pointing this out – if we don’t hold ourselves accountable when we’re incorrect, we lack the integrity to be believed when we are!


Quote
The one thing about your set-up that would bother me is the two "vestigial" partitions. They do no harm except wasting some space, but it's somewhat ugly to have to keep them. I would expect though that this can be fixed easily if it truly is only scripts that control initialization. OTOH, if Sharp stuck something boneheaded in their proprietary code, you'll probably be living with this for a while.


I agree completely and wholeheartedly.

To readers of this post/thread, let me take a moment to turn things around and completely defend Ray's right to question my performance claims. My line of logic and what he was probably basing his doubts upon are two differing technologies. In his defense, all of the CF-device performance issues we've discussed in this posting would completely flip-flop if we were talking about CF flash memory cards rather than Microdrives specifically! Almost all of the performance penalties that I'm complaining about are unique to an actual physical hard drive, with physical heads, servos and spinning platters and the electronic components that control their movements! If we were talking about CF flash memory cards instead, then just about 100% of the performance hits that are the subject here would not exist, because the controlling circuitry is VERY different and CF flash cards have no major moving parts whatsoever. Keep in mind that Microdrives are just that- they are miniature HDDs in every respect, just on a much smaller scale. So don't be too quick to think he was completely in left field for putting forth his doubts.

If these topics interest you either way, I would encourage you, the reader, not to take either of our words on this topic as gospel truth, but rather spend a half hour or so poking around the forums here and the internet in general- you'll end up with a MUCH better understanding of how hardware and software issues affect your end performance on your Zaurus. Many of these things are things that you the user can easily control on your Zaurus, and by using your resources and setup properly you can see nice performance gains without any additional monetary expenditure. And THAT is ALWAYS a good thing!

Something else to note is that HOW you use your Zaurus and what you use your Zaurus FOR will greatly impact whether you personally see any real-world performance gains. In my case I will be using X/Qt and some X-based programs that demand desktop/server-level memory and storage resources in order to perform well. Because of this, the things I've discussed matter a LOT in how fast my Zaurus will perform under such a load. And since these things ARE something I can control, I've chosen to do so as much as possible, since things like upgrading my C3100 to a faster CPU and/or more physical RAM are not options for me at the time of this writing. However, if you are someone who is more apt to use streamlined native Qtopia programs written specifically for your Zaurus, you may never even need a swap file or swap partition etc. in the first place! As a matter of fact, if you do not normally use enough RAM to warrant the need for one, installing them will only DEGRADE the performance of your Zaurus. So for my particular usage these matters, strategies and precautions make sense. For others they may not!

I also must close by confessing that ANY performance-inhibiting thing that exists in my Zaurus that I feel should or could be changed drives me CRAZY until it is fixed. I am an absolute performance nut, overly zealous - a performance junkie, I suppose. While every point that I've made is true within its own context, several of these issues are difficult enough to set up that many users would simply not find ANY performance boost justification enough to go to the trouble of tackling them. This is even more true if the boost would be minimal because their normal Zaurus usage doesn't push the resources already available beyond normal limits.


Quote
Anyway, glad to see you got your system working.  Good luck with it.
~ray


Thank you! I'm very glad too, and as always I wish you and everyone the best with theirs. Please don't feel that I went to all of this trouble to be confrontational; rather, I was excited and thrilled that for once someone was asking questions that I had intimate knowledge of and the ability to give detailed and hopefully helpful answers to, for you and other users that may trip over this post! (This doesn't happen often.)

So thank you for the opportunity it has afforded me to help anyone who may learn from this info.  It makes me feel better to have a few tidbits to give back to the community that I take so much from so often.

For anyone who’s interested the majority of the knowledge expressed in this post was from working as a line technician in a robotically driven IDE storage manufacturing facility for several years. I may not know much- but what I do know I know pretty well. =)

Cheers!,
-NeuroShock

EDIT: The first response was edited for clarity when a reader pointed out that part of the blame lay on the OS/driver side of the issue as well as on the HDD controller. This has been corrected. (Thanks for the keen eye and quick heads up.)
« Last Edit: December 19, 2005, 10:04:46 pm by neuroshock »
SL-6000L & C3100.

Meanie

  • Hero Member
  • *****
  • Posts: 2803
    • http://www.users.on.net/~hluc/myZaurus/
Nevermind. It's Been Fixed.
« Reply #23 on: December 19, 2005, 07:21:22 am »
This is one of my future projects for when I get a bigger cf card or have time on my hands.

Since the C3100 is not dependent on the partition geometry (i.e. sizes) but rather on the partition names (/hdd1, /hdd2, /hdd3), it is possible to just resize those partitions.

I plan to make /hdd1 my swap partition, /hdd2 ext3 partition for applications and /hdd3 fat32 for file storage and usbdisk.
SL-C3000 - pdaXii13 build5.4.9 (based on pdaXrom beta3) / SL-C3100 - Sharp ROM 1.02 JP (heavily customised)
Netgear MA701 CF, SanDisk ConnectPlus CF, Socket Bluetooth CF, 4GB Kingston CF,  4GB pqi SD, 4GB ChoiceOnly SD, 2GB SanDisk SD USB Plus, 1GB SanDisk USB Plus, 1GB Transcend SD, 2GB SanDisk MicroSD with SD adaptor, Piel Frama Leather Case, GoldX 5-in-1 USB cable, USB hub, USB mouse, USB keyboard, USB ethernet, USB HDD, many other USB accessories...
(Zaurus SL-C3000 owner since March 14. 2005, Zaurus SL-C3100 owner since September 21. 2005)
http://members.iinet.net.au/~wyso/myZaurus - zBook3K

speculatrix

  • Administrator
  • Hero Member
  • *****
  • Posts: 3706
Nevermind. It's Been Fixed.
« Reply #24 on: December 19, 2005, 08:08:34 am »
a quick note about using dd, "cp -pr" and tar.

dd is a great way of copying the raw data which makes up a file system. Unfortunately, it's also very dumb - it copies used and unused blocks alike, so a 4GB disk partition with only one file on it will still create a 4GB dump. Of course, you can compress the output of dd quite successfully. Creating a single very large file filled with zeros, and then deleting it before you dump, can help a lot here, as it ensures as much of the free space is filled with zeros as possible. dd is only suitable for copying a disk to another disk when the partition sizes are the same, otherwise you can have some very odd problems.
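as a sketch of that trick (the mount point and output path are just examples):

  mount /dev/hda3 /mnt/fat
  dd if=/dev/zero of=/mnt/fat/zerofill bs=1M   # runs until the disk is full, then errors out- expected
  rm /mnt/fat/zerofill
  umount /mnt/fat
  dd if=/dev/hda3 | gzip > /tmp/hda3.dd.gz     # zeroed free space compresses to almost nothing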

"cp -pr" will indeed copy a filesystem. The snag is that it doesn't understand symbolic links, so if you have say
  libsomething.so, libsomething.so.1, libsomething.so.1.1
where the two former are a soft link to the latter, when you do the copy you'll end up with three files, not two.

tar is often the best way to copy a file system, as it can not only preserve ownership but also symbolic links:
  cd olddir
  tar cf - . | (cd newdir ; tar xf -)

what this does is to tar up the current directory and downwards, sending stdout (writing the tar) to a pipe, then in another process CD'ing to the destination, and unpacking the tar file from stdin (this is what "-" means... either write to stdout or read from stdin).

using tar like this is perhaps the best way to copy a disk filesystem from one place to another. Strictly speaking, you should use "tar xfBp -" to unpack, because it blocks on read; it's usually the default in most systems when the input is specified as stdin (the "-" char says read from input).

you can also make backups like this:
    tar cf - . | gzip > /tmp/mybackup.tar.gz

or even copy the filesystem from one machine to another:
   tar cf - . | gzip | ssh othermachine "cd newdir ; gunzip | tar xfBp -"

note that gzip and gunzip are usually the same file, with a softlink from one to the other, and the program works out which one is which when run.

hope this helps
Paul
« Last Edit: December 19, 2005, 12:16:00 pm by speculatrix »
Gemini 4G/Wi-Fi owner, formerly zaurus C3100 and 860 owner; also owner of an HTC Doubleshot, a Zaurus-like phone.

bam

  • Hero Member
  • *****
  • Posts: 1213
    • http://thegrinder.ws
Nevermind. It's Been Fixed.
« Reply #25 on: December 19, 2005, 11:52:34 am »
this is perhaps the most useful thread I have ever read. With you guys' OK I will copy sections to my site, especially the hard-drive/swap-partition/loopback-device material. Great work Neuro!
SL-C3100 current: Stock/Tetsu 18h
Socket BT CF Card
Linksys WCF-12 802.11b/Cheapie USB Ethernet

The Grinder

speculatrix

  • Administrator
  • Hero Member
  • *****
  • Posts: 3706
Nevermind. It's Been Fixed.
« Reply #26 on: December 19, 2005, 12:21:13 pm »
more on dd, tar, cp

on Linux, you can use "dump" to dump a filesystem to a backup device... on Solaris this is called "ufsdump". It's a bit more robust than tar - it works at a lower level. I'm not sure if dump has been built for the Z.

there's also a command called "cpio" which is more powerful than tar; I very rarely use it, but it's worth being aware of if you want to control how and what to archive more flexibly than with tar.
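for example, the classic cpio pass-through copy, which preserves links and ownership (the directory names are placeholders):

  cd /olddir
  find . -depth | cpio -pdm /newdir   # -p pass-through, -d create dirs, -m preserve mtimes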
Gemini 4G/Wi-Fi owner, formerly zaurus C3100 and 860 owner; also owner of an HTC Doubleshot, a Zaurus-like phone.

neuroshock

  • Sr. Member
  • ****
  • Posts: 269
Nevermind. It's Been Fixed.
« Reply #27 on: December 19, 2005, 12:28:09 pm »
Bam,

I agree - VERY useful info flying around everywhere in this thread. I've learned a LOT myself.  By all means you have my explicit permission to reuse any portion of what I've posted that you may find useful to yourself or others.
Your site is a wonderful repository of knowledge and a great asset to the Zaurus community.

Have a Great Day All!,

-NeuroShock
SL-6000L & C3100.

bam

  • Hero Member
  • *****
  • Posts: 1213
    • http://thegrinder.ws
Nevermind. It's Been Fixed.
« Reply #28 on: December 19, 2005, 12:32:12 pm »
Quote
This is one of my future projects for when I get a bigger cf card or have time on my hands.

Since the C3100 is not dependent on the partition geometry (ie sizes) but rather the partition names (/hdd1, /hdd2, /hdd3) it is possible to just resize those partitions.

I plan to make /hdd1 my swap partition, /hdd2 ext3 partition for applications and /hdd3 fat32 for file storage and usbdisk.



can you put a directory on a swap partition? i.e. .sys?

cool Neuro, put it over there already...good stuff!
« Last Edit: December 19, 2005, 12:33:39 pm by bam »
SL-C3100 current: Stock/Tetsu 18h
Socket BT CF Card
Linksys WCF-12 802.11b/Cheapie USB Ethernet

The Grinder

cybersphinx

  • Jr. Member
  • **
  • Posts: 69
Nevermind. It's Been Fixed.
« Reply #29 on: December 19, 2005, 01:15:39 pm »
Hm... some of your explanations completely contradict what I know about computers (PCs mainly; some things might be different on other platforms, though I don't know why they should be).

Quote
As for this first question this is simple. The main performance hit comes from drive geometry issues and hardware performance from a hard drive's perspective more than the CPU or software. Seeks being delivered to drive partitions are routed by priority.

That's the first time I've heard this. You make it sound like every partition is a separate device which gets addressed separately on the bus itself (i.e. in hardware). But (as far as I know, and I'm pretty sure of this) the hardware only knows about the whole disk; the partitioning just concerns the software. So every access to the disk gets handled when it arrives (well, perhaps not anymore, since the drive firmware probably does some optimizing of the accesses). Any prioritization in partition access will (or will not) be done in software.

Quote
(Much like irq's establish priority for devices recieving the CPU's attention and therefore bandwidth.) So Primary partitions recieve primary routing. Extended partitions must take back seat to Primary partitions when routing conflicts occur.  And they occur a LOT on an ide bus.

Like I said, the IDE bus doesn't know anything about partitions, so there are no partition-based priorities.

Quote
Further, Extended partitions from a drive geometry translation perspective are just that- extended partitions that are extended FROM a primary partition.  Actually it would be more accurate to say they are extended THROUGH the Primary partition.  For Extended partitions, not only do all routing calls have to be delivered through the interface via the Primary partition, but by definition extended partitions sit on a Logical partition also.

On a usual PC harddisk there can be four primary partitions (defined in the master boot record's partition table), for compatibility with DOS-based systems (up to Windows ME; and the NTs probably haven't changed anything there, for compatibility's sake), and usually (there are exceptions) DOS-based systems can only see one of those. To get around the four-partition limit (and to get more than one partition in DOS), logical partitions were invented. Those are the same as primary partitions, but include a partition table themselves.

A pure Linux system can work without any partitions; you can just use the whole device and create a file system on it (like "mkyourfavouritefs /dev/hda; mount /dev/hda /mnt"). Or use a non-DOS partitioning scheme, which probably doesn't have those limitations in the first place. Of course, your disk will be incompatible with DOS systems then, but who cares?

Here are two links about partitions: http://www.ranish.com/part/primer.htm and http://www.lissot.net/partition/partition-03.html.

Quote
As you were inferring about Swap Drives possibly being a loopback device (and we'll address that shortly) the extended partitions are somewhat of a similar loopback device system that sits upon a Logical (in this case meaning "doesn't really have a physical geometry", remember the Primary partition is providing the actual calls for geometry access when the head/servo need to know where to go), drive.

The only translation that's done is from LBA to the actual drive geometry, in the drive's controller; that shouldn't be a performance issue (but could be, given the usual stupidity in PC hardware).

Quote
It's why Primary partitions are almost always preferred for OS's to boot from.  Ditto for Swap Partitions.

That comes from the time when the fastest transfer rate was on the first sectors of a disk. Nowadays you can't usually say where access is fastest, since you don't know which logical addresses are mapped to which physical sectors.

Quote
It's why IDE bogs down so badly compared to SCSI and later interfaces,

That's because SCSI is more intelligent about data transfers, especially if there are lots of devices involved. IDE was just the cheaper, fast-enough solution for the masses.

Quote
ESPECIALLY when you also have a Primary and Secondary hard drive on the same Channel.  This is because the Primary drive (Master) provides controller services for  BOTH the Master and the Slave disk.

Quoted from http://en.wikipedia.org/wiki/Advanced_Technology_Attachment: "Although they are in extremely common use, the terms master and slave do not actually appear in current versions of the ATA specifications. The two devices are correctly referred to as device 0 (master) and device 1 (slave), respectively. It is a common myth that "the master drive arbitrates access to devices on the channel." In fact, the drivers in the host operating system perform the necessary arbitration and serialization. If device 1 is busy with a command then device 0 cannot start a command until device 1's command is complete, and vice versa. There is therefore no point in the ATA protocols in which one device has to ask the other if it can use the channel. Both are really "slaves" to the driver in the host OS."

The problems with two devices on the same bus are: 1. Only one device can use the bus at the same time, and 2. The bus runs at a speed both devices support, so a slow device limits the speed of a faster one. (Both might have changed in recent years, I don't really know. But it surely is the basis for the "two devices on the same bus are slower than on two busses" saying.)

Quote
This is why it's so much faster to copy from a Master drive on one channel to a Master drive on the second channel rather than from a Master to a Slave.

When the devices are on two busses, both can be accessed at the same time, while on one bus one device always has to wait for the other to have finished its transfers.

Quote
We can sum all of the above up to be "translation overhead" that is dramatically increased when the most used partitions are also on extended/Logical partitions. You are just introducing two more translation levels as well as the lower priority issues as compared to avoiding all of it by putting that data on Primary partitions in the first place.

The only extra "translation" that is done when accessing logical partitions is that the addresses have to be read from the partition table in the extended partition in addition to the one in the master boot record.

Quote
That's the hard part. The easy part of the answer to this question is much simpler for most people to understand. In most drive geometries Primary Partitions almost always get the "favored location" for data. The two "favored locations" are the first track and the middle tracks. This is because the physical head of the drive is most often over those two tracks - much more so than anywhere else on a drive. This is why just about every operating system in existence that uses physical hard drives as its operating medium will by nature put its most-accessed files on the first or middle tracks IF the user partitions the entire physical drive as one large partition. Things get muddled really fast when multiple partitions are used, as the OS has no real way of knowing where the new physical First and Middle tracks are located.

But one thing is ALWAYS true. Extended/Logical partitions are NEVER located on the first track and are usually located PAST the Middle track as well, simply by the fact that they are almost always placed AFTER the primary partitions physically on the Drive.
Quick second-half recap - data that is placed AFTER the Middle physical track takes longer to get to simply because the servo arm/head has to travel farther out of its normal range to get to the track. Period.

Well, nowadays that's not necessarily true anymore, since one logical address can be (almost) anywhere physically on the drive, and the data mapping can differ between drives as well (see http://www.lissot.net/partition/mapping.html).

Quote
You'll immediately notice that both of these problems are contained within the IDE HDD controller itself and have little/nothing to do with the actual CPU, bandwidth etc. of your Zaurus.

That's only half true. There should be no performance penalties for using a logical partition instead of a primary one (provided both use the same area of the disk).

Quote
Quote
As for extended partitions, I would think that any extra work to access these would be done at mount time.  Once they are mounted and the drivers know the addresses of the partitions I would expect there to be zero performance impact.

Right.

Quote
The HDD controller (that is mounted on the drive itself for IDE drives) does all the work for this and in doing so also becomes our performance bottleneck.

But it should be able to do this as fast as the interface requires (except perhaps on some really cheap drives - after all, it's still PC hardware...).

Quote
Quote
As for loopback, the magic that takes place there is done in software and in RAM: I would expect that there are no additional device interactions that take place.  I'd bet that any performance impact would be hard to measure, much less sense on a human level.  Again, why do you think differently?

You're exactly right "the magic that takes place there is done in software and in RAM". I couldn't have said it better myself.  And because of this both the software and the RAM required to make this loopback device translation require extra CPU cycles and CPU as well as memory bandwidth.  By definitition any thing that you add that requires additional software/RAM to handle will add processor overhead and incur a performance hit.

Loopback devices (and swap files) have a certain performance hit, because every access has to be done through a file system, which is noticeably more complex than directly accessing a device itself. This gets more noticeable when the device gets faster in relation to the main CPU doing all the work (as in the Zaurus, where you have a relatively slow CPU).

Quote
Quote
And lastly, I see no reason for there to be any kind of reliability hit with either of these.  If they work, they work.  What would make them any less reliable than other solutions?

I guess there is a larger chance of things going wrong when going through a file system, but that shouldn't be an issue (I wouldn't use that file system for anything else then).

cybersphinx

PS: Damn, somewhere I screwed up the quoting, but I don't see where. Sorry for that.