Inventory tracking database
Revision as of 19:00, 11 November 2013
Introduction
In light of my failure with FGDB and the recent decision to pare down the number of different interpreters running on the freegeek.org server, I'm trying to work out how we might use our existing infrastructure to construct an inventory system.
What we have already:
- Publicly-accessible server
- MySQL database
- httpd with mod_php
- Web framework (MediaWiki)
Overview
I intend to implement the new system as a MySQL database on dynamo. We can create arbitrary clients for this database with ODBC. Additionally, I hope to make inventory reports accessible on this wiki via Special: pages; see http://www.mediawiki.org/wiki/Writing_a_new_special_page
Special pages are technically not editable, as they are dynamically constructed from a database call. Unlike normal wiki pages, Special pages can also be read-restricted, in case we feel the need to keep our inventory secret for some reason.
I propose to build an inventory database and use it to track our stuff. I further propose to use MediaWiki's user system to track our volunteers. This will mean, among other things, that every volunteer must have a wiki account, which in turn will require a full-time-ish wiki admin. I also hope to implement a Kerberos or similar SSO system so that FGSEA volunteers' logins work on all FGSEA areas (save dynamo).
Entries to the inventory database should be largely automated, as follows: we'll build a custom bootable volume (CD, hard disk, USB stick, whatever) using a recent Ubuntu kernel and a minimal userland that, among other things, runs lshw at startup. If the machine is connected to a network, a startup script will then upload the resulting XML file, either to freegeek.org or to a local server, via curl. The XML file can then be parsed into a data table by MySQL (http://dev.mysql.com/doc/refman/5.5/en/load-xml.html), but this is a separate step that can be performed in a batch, possibly as a cron job, if we can come up with some method of validating the XML. Does that last point matter? I'm not sure.
Tools and components:
client end:
- debootstrap - for building boot images
- lshw
- curl, wget, nc or similar
server end: This bit is already set up :)
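As a sketch of the client-side flow, the startup step might look something like this in Python. The upload URL and form field name are made up for illustration (no endpoint exists yet), and lshw and curl would have to be present on the boot image:

```python
# Sketch of the client-side capture-and-upload step. The endpoint URL and
# the "report" form field are hypothetical placeholders, not a real API.
import subprocess

LSHW_CMD = ["lshw", "-xml"]

def build_upload_cmd(xml_path, url):
    """Build the curl invocation that posts the lshw report as a file upload."""
    return ["curl", "--silent", "--fail", "-F", "report=@%s" % xml_path, url]

def capture_and_upload(url, xml_path="/tmp/inventory.xml"):
    """Run lshw, save its XML output, then hand the file to curl."""
    with open(xml_path, "wb") as f:
        subprocess.check_call(LSHW_CMD, stdout=f)
    subprocess.check_call(build_upload_cmd(xml_path, url))
```

Keeping the curl command in one small function makes it easy to swap freegeek.org for a local server later.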
Project parts and necessary infrastructure
Chunks of the project
This project can be conceptually broken up into a number of semi-independent pieces; the person working on one piece shouldn't have to know much of anything specific about the others. I will list them out here, and if I can get people to adopt chunks, I will put their names on each section so folks know who to contact for help.
Project Management
I'm afraid this will probably have to be me, User:Andrewk, unless someone else wants to do it. It shouldn't be too difficult to find someone more experienced at management than I am. This role will be responsible for setting up infrastructure, like a git repository to hold scripts and other code, and will also need Admin or Bureaucrat access on the wiki.
Rolling up a custom boot volume
This is the part I'm currently working on. See Custom boot volume for more details. This should be a bit easier than it sounds. You can probably use SUSE Studio (http://en.wikipedia.org/wiki/SUSE_Studio) for most if not all of it. I've not used it before, but it's free, so there you go. This boot volume will need a kernel that boots up like Ubuntu kernels do (that is, one that is not statically compiled and that probes hardware on startup). In fact, a vanilla Ubuntu distro would probably work, with a few simple additions:
- It needs lshw and curl installed
- It needs a startup script (how do you do this in Upstart? I don't know yet) that runs lshw, captures the output as a file, and then uploads that file somewhere using curl.
- There's probably more, and certainly there are plenty of bells and whistles I can think of to add once we get the basic deal working.
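To take a stab at my own Upstart question: a job file might look roughly like the following. The job name, paths, and upload URL are all placeholders, and I haven't tested this:

```
# /etc/init/inventory-report.conf -- hypothetical, untested Upstart job
description "run lshw at boot and upload the report"
start on (local-filesystems and net-device-up IFACE!=lo)
task
script
    lshw -xml > /tmp/inventory.xml
    curl -F "report=@/tmp/inventory.xml" http://freegeek.org/inventory-upload || true
end script
```

The `start on` condition is meant to delay the job until networking is up; someone who actually knows Upstart should sanity-check it.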
Nothing that isn't already installed on the server should be required to get this working at a basic level. One or more additional servers might need to be built for intermediate steps, but that's pretty easy.
Setting up XML -> MySQL parser
This will involve figuring out how to make lshw's tree-structured XML output fit into MySQL, and designing the tables. The actual parsing can be done by a MySQL builtin; the prep work is pretty much all of it. This role could also make the "People" tables that relate information on volunteers, vendors, etc. to inventory and MediaWiki user data. Obviously this requires some MySQL mastery, or at least general RDBMS knowledge; the MySQL-specific part isn't much.
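As a sketch of the tree-to-table problem: lshw's output is nested node elements, and one standard way to fit a tree into a relational table is an adjacency list, where each row records its parent's id. The column choices below are hypothetical; the real schema is still to be designed:

```python
# Flatten lshw-style nested <node> XML into rows for an adjacency-list
# table (id, parent_id, class, description). Columns here are a guess at
# what we'd want, not a finished schema.
import xml.etree.ElementTree as ET

def flatten(xml_text):
    rows = []

    def walk(node, parent_id):
        node_id = len(rows) + 1
        desc = node.findtext("description", default="")
        rows.append((node_id, parent_id, node.get("class", ""), desc))
        for child in node.findall("node"):  # direct children only
            walk(child, node_id)

    walk(ET.fromstring(xml_text), None)
    return rows

# Toy input shaped like lshw output (real output has far more attributes)
sample = """<node id="machine" class="system">
  <description>Desktop Computer</description>
  <node id="core" class="bus">
    <description>Motherboard</description>
    <node id="cpu" class="processor">
      <description>CPU</description>
    </node>
  </node>
</node>"""
```

The resulting rows load straightforwardly into a single table, and the varying per-device attributes could go into a second key/value table keyed on node id.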
Building the MediaWiki extension to display these database reports as Special: pages
Again, most of the work here is done for you, especially if you are already a MediaWiki hacker. If you are not, then there are a few special objects and data structures to know about; http://www.mediawiki.org/wiki/Manual:Database_access and http://www.mediawiki.org/wiki/Writing_a_new_special_page are good starting points. This will require basic object-oriented PHP knowledge, and access to whichever group owns the MediaWiki code on the server (the test server, that is).
There might be more stuff that I've forgotten about. Hopefully someone will come along and add / change things.
Concerns:
- Doing automatic uploads with hard-coded passwords to our live webserver is obviously a Bad Idea. It's probably advisable to set up a secondary, local server to receive the files and then upload them in a batch. The local server can even do the XML->MySQL conversion, I suppose.
- Converting XML to MySQL tables is not a trivial task, and lshw's XML output may not be well suited to it. Not knowing yet how the import proceeds, I'm not sure how well lshw's varying output will work with SQL tables' fixed number of fields. http://ezix.org/project/wiki/HardwareLiSter#Attributes gives an idea of what the XML output looks like and has more information on how lshw works. The problem seems generalizable to: how do you collapse a tree into one or more RDB tables?
- This is a HUGE amount of work for one guy to do. I estimate this is a 100+ man-hour job for someone with my level of expertise. I don't have 100 hours to devote to this. A little help here?
- Those working on this project, or at least whoever plugs in the code to make the Special: pages, will need Developer status on the wiki. So far only Mark and Will have the power to grant this. The need is not immediate; a lot of groundwork needs to be laid first.