Author Topic: Hdparm -t Results  (Read 2849 times)

adf

  • Hero Member
  • *****
  • Posts: 2807
    • View Profile
Hdparm -t Results
« on: July 01, 2007, 03:42:00 am »
on my 3100 running pdaXii13 standard at the 416 MHz clock speed, with IceWM, hdparm -t gets me:

hdparm -t /dev/hda3 2.43 MB/sec  

hdparm -t /dev/mmcda1 2.99 MB/sec (150x 4G SD)

not too speedy. am I missing something?
what kind of speeds are you folks seeing?
how about bsd?
rc 198?
Angstrom?

these are read speeds... anyone know how to get write speeds?
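One rough way to get a write-speed number is timing dd writing a file to the card. This is just a sketch, not from the thread: the mount point below is an example, and `conv=fsync` (which forces the data out of the page cache before dd exits, so the time reflects the real write) may be missing from some busybox dd builds.

```shell
# Rough write-speed test: time writing 32 MB of zeros to the card.
# /mnt/card/testfile is an EXAMPLE path -- point it at the flash
# device you want to measure. conv=fsync flushes to the medium before
# dd exits, so the page cache doesn't fake the number.
time dd if=/dev/zero of=/mnt/card/testfile bs=1M count=32 conv=fsync
rm /mnt/card/testfile
```

Divide 32 MB by the elapsed seconds to get MB/sec.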
**3100 Zubuntu Jaunty,(working on Cacko dualboot), 16G A-Data internal CF, 4G SD, Ambicom WL-1100C Cf, linksys usb ethernet,  BelkinF8T020 BT card, Belkin F8U1500-E Ir kbd, mini targus usb mouse, rechargeble AC/DC powered USB hub, psp cables and battery extenders.

**6000l  Tetsuized Sharprom, installed on internal flash only 1G sd, 2G cf

Da_Blitz

  • Hero Member
  • *****
  • Posts: 1579
    • View Profile
    • http://www.pocketnix.org
Hdparm -t Results
« Reply #1 on: July 01, 2007, 09:15:48 am »
sounds about right. there are a couple of options you can fiddle with, but your PDA will crash moments later (e.g. 32-bit support)

there isn't much you can do to help this except add a swap file and increase the swappiness; that way you get a bit more free RAM for caching files.

for read speeds try timing a cat <file>. i can't remember for sure, but i believe there is a time command under pdaXrom
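For reference, the swappiness knob mentioned above is a /proc tunable on 2.6 kernels; a sketch (changing it needs root):

```shell
# Swappiness is 0-100: higher values make the kernel swap idle pages
# out sooner, leaving more RAM free for the page cache.
cat /proc/sys/vm/swappiness
# Raise it (needs root); takes effect immediately on 2.6 kernels.
echo 80 > /proc/sys/vm/swappiness
```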
Personal Blog
Code
Twitter

Gemini Order: #95 (roughly)
Current Device: Samsung Chromebook Gen 3
Current Arm Devices Count: ~30
Looking to acquire: Cavium Thunder X2 Hardware

qx773

  • Full Member
  • ***
  • Posts: 219
    • View Profile
Hdparm -t Results
« Reply #2 on: July 01, 2007, 10:21:24 am »
Printing a file to the screen with the cat command would not be a good measure of read speed from a flash memory device.  The latency and bandwidth of the display would be the limiting factor.  If you already have a large file on a flash memory device, you could try copying it to /dev/null, such as:

time nice -9 cp /usr/mnt.rom/cf/swapfile /dev/null

The file should be large enough so that it does not fit inside cache memory.
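Building on the cp-to-/dev/null idea: on 2.6.16+ kernels you can also flush the page cache explicitly before the read, so the file does not have to be larger than RAM. A sketch (drop_caches needs root, and the file path is the example from the post above):

```shell
# Flush dirty data, then drop the page cache so the next read really
# hits the flash instead of RAM (needs root, kernel 2.6.16+).
sync
echo 3 > /proc/sys/vm/drop_caches
# Time the read; file size in bytes / elapsed seconds = read speed.
time cat /usr/mnt.rom/cf/swapfile > /dev/null
```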

adf

  • Hero Member
  • *****
  • Posts: 2807
    • View Profile
Hdparm -t Results
« Reply #3 on: July 01, 2007, 04:06:52 pm »
so the issue is usable hardware limits much more than software? (I was running swap via swpd on the SD card when I ran the test)
the reason I got interested was that I think I'll swap out the internal MD for a 16GB CF --- I wondered whether the Z would be faster or slower, and whether the "mere" 40x speed of the A-Data card I ordered would limit performance on the Z. I'm guessing a minimal increase in read speed, plus faster finding of files / lower latency?

Da_Blitz

  • Hero Member
  • *****
  • Posts: 1579
    • View Profile
    • http://www.pocketnix.org
Hdparm -t Results
« Reply #4 on: July 02, 2007, 08:42:54 am »
i would be careful with the CF card. they are good and save power (hell, i swapped the HD on a Kohjinsha and now get 6hrs+ min, 8hrs max), however you really, really want to play with the formatting options, as that will lengthen the life of the card and greatly speed up filesystem access. considering different FS's would help too

CF usage cuts into usable RAM speed, so reading from a CF card slows down how much data you can dump from RAM. i remember the WinCE guys noticing a speed improvement on slow-memory devices by using an SD card for storage rather than a CF card (different bus)

if you find the sweet spot with the different FS's and block sizes then you might see a speed improvement. the sandisk site used to have a whitepaper (now disappeared) that stated the exact situation where one would get 40x speed. for large files it was easy provided you maxed out the block size of the filesystem: the flash they were using had a 128KB erase size, so reading and writing 512 bytes at a time meant that the same block got erased several times.

of course you wouldn't notice this on a microdrive, where there is no minimum erase size, however flash is a different ball game.

have a look at the man page, however a quick glance suggests: mkfs.ext3 -b 4096 -O dir_index,filetype <dev>

note the -O stuff may be unsafe; i use it and haven't had any problems, and the dir_index thing sounds nice. the -b changes the block size to the largest valid size, which means that if you have a file smaller than 4K the extra space is wasted, however you have 16GBs don't you?

i believe that fat32 and xfs can both do larger block sizes (fat32 is up to 128KB, which would be ideal, and i believe xfs was designed for large devices), but both are out of the question.

reiser3 has "tail packing", which would jam several files into that 4K space, but i am not sure of distro support for that.
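Putting the formatting advice above together, a sketch of the whole sequence. The device name here is an example only; mkfs destroys everything on the device, so triple-check which one is the CF card first.

```shell
# Format the CF card ext3 with 4 KB blocks (fewer, larger operations
# against the 128 KB erase blocks) plus hashed directory indexes.
# /dev/hda1 is an EXAMPLE device -- verify yours before running!
mkfs.ext3 -b 4096 -O dir_index,filetype /dev/hda1
# Mount with noatime so reads don't trigger extra flash writes.
mount -o noatime /dev/hda1 /mnt/cf
```

The noatime option is a further flash-friendly tweak not mentioned in the post: without it, every file read updates the access timestamp, costing a write.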

tried that code, and for some reason /dev/null is always cached and i can't fill it up. however i get incredible compression