i wrote a couple of shell scripts that act like 'locate', in that they:
- create an index of all (or some of) the files on your computer... ahem, zaurus
- act as a "user-friendly front end" to grep + less -p
in addition to standard 'locate', though (and that's the catch), they also look inside archives (e.g. ZIP, TAR, TAR.BZ, RAR, IPK, DEB, ...).
now, once a week or month, i run a "reindex". it takes some time to extract the archive listings, etc.
and when i need to search, i just run
# locate str1 [str2]
and i get the matching entries back.
the answer is almost immediate, cpu usage is negligible, the flash lives longer :-) etc.
the average size of the index on my c860 is ~1.5MB. i guess i could use bzgrep or similar to shrink it to around 300kb.
the scripts themselves are only a few kb.
it depends on:
- python modules: zipfile, tarfile, re (from the standard library)
- find (to produce the file list)
- grep (you all have it, don't you?)
- less (not a must, but very convenient to have)
install python and the required modules
copy the scripts to /usr/bin or anywhere else in your PATH
run "reindex.py" once in a while to refresh the index.
"reindex --help" for options.
once the index is built, use "locate.sh file1 [file2]"