Full Version: Serving Wikipedia Locally On The Z
OESF Portables Forum > Everything Else > Archived Forums > Zaurus General Forums > General Support and Discussion > Software
With the availability of Wikipedia dumps I'm searching for the best way to serve these and browse them in a web browser locally on the Z.

There has been some recent discussion of a user converting Wiki dumps and using a squashfs file image to view them on a Zaurus. However, the dump format has since changed; it is XML now, I think.

So I'd like to ask whether anyone is doing this, and what you are using for it. I've seen this, which seems like a good choice since I'm already running LAMP on the Z. However, you don't get images...

Any recommendations on the best way to do it on the Z? Obviously it would have to be a squashfs image in order to fit the data on a card. But how? Like the above post? Convert on a PC to a MySQL format? Is anyone doing it this way, or another?
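For the squashfs route, a minimal sketch of the workflow might look like this. The directory names and the card mount point are assumptions for illustration, not from this thread; adjust them to your own card layout.

```shell
# On the PC: pack a converted HTML tree of the wikipedia into a
# compressed, read-only squashfs image (paths are examples).
mksquashfs wikipedia-html/ wikipedia.sqfs -noappend

# On the Zaurus: mount the image from the card so a local web server
# (or the browser directly, via file://) can read the pages.
mkdir -p /mnt/wikipedia
mount -t squashfs -o loop /mnt/card/wikipedia.sqfs /mnt/wikipedia
```

Note that the squashfs module in the Zaurus kernel has to match the squashfs version used by mksquashfs on the PC, so it is worth checking that a small test image mounts before copying a multi-hundred-megabyte one over.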
I would be interested in this also, and believe Meanie said at one point that he was working on this. Perhaps one day soon he will surprise us with a post. wink.gif

Rather than running mediawiki, apache, etc., there is an alternative approach that sounded appealing at first: converting the data files to compressed html files that could be browsed directly. However, after a little investigation, it seems there is so much variation in file format, and so much loss of functionality involved, that it did not work very well, and the conversion programs I saw didn't seem to be actively maintained.
I don't know if this is off topic.
I use Zbedic and after looking at Meanie's page, I downloaded
It doesn't have any pictures, just text.
Google should find it for you. It is 412MB and slows down launching of zbedic.
I just put it on my C3100 hard drive in same directory as the dictionary file that I normally use with zbedic.
It only slows down zbedic if it's enabled in the dictionary selection screen.
I think I'll try making another copy of zbedic and renaming it zbedic2.
With two different copies of zbedic, I could load one with the usual dictionary file for fast lookup, and load the second one with the wikipedia, since it launches so slowly.

EDIT: I tried making a second copy of the binary zbedic as zbedic2 and I also put it in a different tab, but if I load the wikipedia into one of them, it's loaded into both of them.
This kind of makes the "Dictionary" function of zbedic useless as a quick word-lookup app.
I'm sure this is the reason I tried the wikipedia once in zbedic and then removed it.
I thought having 2 binaries with different names would allow me to run two different instances of zbedic and have one loaded with just the dictionary, and the other loaded with both the wikipedia and dictionary.
I have used this same method of copying and renaming a binary once before, and it still works for qkonsole.
(I have 2 copies of qkonsole in 2 different tabs. One runs in magnified mode, the other runs in "normal" mode)
QUOTE(Jon_J @ Oct 29 2006, 11:31 PM)
I thought having 2 binaries with different names would allow me to run two different instances of zbedic and have one loaded with just the dictionary, and the other loaded with both the wikipedia and dictionary.

Both copies of the binary are sharing the same configuration files. What you need is 2 sets of configuration files and some way of switching between them, rather than 2 binaries.
QUOTE(Jon_J @ Oct 30 2006, 12:31 AM)
It doesn't have any pictures, just text.
Google should find it for you. It is 412MB and slows down launching of zbedic.

The current solution is to check the "fast load" box for zbedic, if you do not mind having less RAM. The right solution would be implementing "lazy" loading of dictionaries, but I have never had time to do it.

Images for wikipedia would probably take too much space. It is possible to have images in zbedic starting from 1.1, but nobody has so far tried it with Wikipedia.

To have two copies of zbedic, you would probably need to edit the "control" files in the .ipk. But I do not recommend this solution.
I decided to disable wikipedia in the dictionary selector.
Launch Zbedic with just English dictionary - 3½ seconds.
Launch Zbedic with wikipedia & English dictionary - 8½ seconds.

I don't like using fast load.
I disable fast load on any app that I install, that has it enabled.
I'm keeping wikipedia on my HDD, so I can re-select it again if I need it.
The wiki2zaurus homepage hasn't been updated in a long time and the latest downloadable version there of the converted wikipedia is quite old. From what I read about changes in format in the wikipedia, it seemed that the old perl scripts would need a good bit of work to process the current wiki files but I could be wrong. Searching wiki2zaurus on this forum gives a couple of recent threads that suggest someone may be working on this or another approach.
Wikipedia needs to be on my Z. NEEDS.
You don't necessarily need apache installed on the Z to run cgi scripts - check out my posting on security/networking for a shell-script web server which will run a CGI, and take a look in general discussion for my squashfs-oesf-forum-archive.
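As a rough sketch of that idea, a loop around busybox netcat can serve one CGI invocation per request. The port number and the CGI path below are made up for illustration, and real request parsing (query strings, PATH_INFO, and so on) is left out.

```shell
#!/bin/sh
# Minimal one-CGI web server sketch using a fifo and netcat.
PORT=8080
CGI=/home/root/wiki.cgi          # hypothetical lookup script
FIFO=/tmp/httpd.fifo

mkfifo -m 600 "$FIFO" 2>/dev/null
while true; do
  nc -l -p "$PORT" < "$FIFO" | (
    read request_line             # e.g. "GET /Foo HTTP/1.0"
    printf 'HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n'
    QUERY="$request_line" "$CGI"  # hand the raw request to the CGI
  ) > "$FIFO"
done
```

The fifo is what lets a single nc both feed the request to the CGI and carry its output back to the browser; it handles one connection at a time, which is fine for local use on the Z.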

Maybe you can get a minimal MediaWiki CGI running with a minimal MySQL backend.
A solution for a small wikipedia-specific server would be wikijserver (parts of the site are in German):

It runs pretty well on my Z with the German "Exzellente Artikel" dump from the site. The only problem is that there is no dump of the complete wikipedia around, and so far I have found no info on how to generate one. But IMHO this is the best solution for a complete wikipedia on the Z, provided we get a recent dump...


PS: for all who want to try it, the wikijserverl_arm_1_4_12_all.ipk doesn't work with Jeode or PersonalProfile (on Cacko), but the ewe version with the ewe runtime works without problems.
Here is a list of toolkits for converting the data:

Notably, TomeRaider is listed, which is intended for portable devices, but I know of nothing that can read its files on the Z.

My idea is to translate the XML into portabase format, although I've no idea how yet.

Does anyone who uses portabase know whether it would handle a large database?
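To get a feel for whether portabase could cope, one can at least count how many records a conversion would produce by pulling the <title> elements out of the XML dump. The dump file name below is an example, and this assumes GNU grep on the PC (busybox grep may lack -o).

```shell
# Count article titles in a MediaWiki XML dump (file name is an example).
grep -o '<title>[^<]*</title>' enwiki-pages-articles.xml \
  | sed 's/<[^>]*>//g' \
  | wc -l
```

The English wikipedia runs to well over a million titles, so whatever format the articles end up in, the index alone is a sizeable table.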
Hi all,

just to get this topic back into discussion: I do think that the way wiki2static works (compress and reformat the downloaded dump, then use Apache with CGI to generate pages and run searches) is the way to go. Installing a complete wiki with MySQL to access the database is probably too much for the Zaurus, and not only in terms of available storage space.

As I'm not really satisfied with the *bedic variants (neither "q" nor "z"), wouldn't it be great if someone with knowledge of Perl had a look at wiki2zaurus? Maybe they could get it working again...