Consumerium:Retrospection
The original design paradigm dates back to 2001, when Juxo had no knowledge of the existence of collaborative editing software such as wikis, and the original plan was to:
- Use XML for both facts and opinions to keep it human-readable and editable, and to avoid becoming too technocratic
- Since then, wikis have proven that large numbers of people can adopt and use even a fairly complex wikitext standard, but far fewer have proven able to write XML.
- Keep Consumerium workable by applying strong algorithmic checks on insertion of data.
- This permission-based model is not the wiki way; the large public wikis all became large by relying on "fast revert" of vandalism and some process for dealing with trolls, although most do that rather badly. Division into a Signal Wiki (strict) and a Research Wiki (looser) may serve to allow for openness and strictness in different forums.
- Store historical data persistently so that trends could later be discovered from the functioning of Consumerium Services, mostly The Consumerium Exchange, which is a metaphor for a place where NGOs can bring their campaigns to see which campaigns and which campaigners have the broadest support.
- This could also provide lots of use cases for testing: a whole test suite of user story fragments.
- The distinction between core data (kept in centralized databases, verified by "staff" or some trusted faction, and used to generate the Consumerium buying signal) and scatter data, where the only information required for insertion into the system would be the URI from which to fetch the data. This was, in a sense, meant to allow free circulation of fiction, as "external" or "scattered" XML could be delivered and parsed for the Consumer (see the sketch after this list).
- Distinguish the presentation of information, handled by a program called the Consumer Agent, from its storage.
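As a rough illustration of the core/scatter distinction and of algorithmic checks on insertion, the sketch below (all names hypothetical, not actual Consumerium code) shows a scatter-data record that carries nothing but a URI: the referenced XML is fetched and must pass a basic well-formedness check before it is stored, whereas core data would be verified far more strictly before feeding the buying signal.

```python
# Hypothetical sketch, not actual Consumerium code: scatter data is referenced
# only by URI; the XML is fetched and checked before it is made available.
import urllib.request
import xml.etree.ElementTree as ET


def fetch_scatter_xml(uri: str) -> ET.Element:
    """Fetch external ("scattered") XML by URI and parse it."""
    with urllib.request.urlopen(uri, timeout=10) as response:
        return ET.fromstring(response.read())  # ParseError if not well-formed


def insert_scatter_record(store: dict, uri: str) -> bool:
    """Minimal algorithmic check on insertion: the URI must resolve to
    well-formed XML. Core data would face much stricter verification."""
    try:
        store[uri] = fetch_scatter_xml(uri)
    except Exception:
        return False  # reject records that cannot be fetched or parsed
    return True
```

A real system would of course need stronger checks (schema validation, source trust), but the point of the split is that scatter data is cheap to reference while core data is verified before it influences the signal.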
Later this "use XML for everything" approach was revised: facts would be stored in a wiki called the Signal/Content Wiki (to distinguish it from this R&D wiki), while Campaign/Research opinions would be stored in an XML grammar called ConsuML for The Consumerium Exchange.
Laziness and a general lack of algorithmically competent developers led to the UI being done in (X)HTML rather than the dedicated Consumer Agent.
Even later this was revised to use wiki code for almost everything, as The Consumerium Exchange gave way to the Opinion Wiki, where opinions could persist and, through strict syntax rules, data could be propagated into a view by code plugged into modified MediaWiki software.
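A minimal sketch of what "strict syntax rules" could mean in practice: opinions written in a rigid template form can be extracted mechanically and propagated into a view. The template and field names below are made up for illustration and are not the actual Opinion Wiki syntax or MediaWiki extension code (which would be PHP), but the parsing idea is the same.

```python
# Hypothetical sketch of "strict syntax rules": opinion records written as a
# rigid wikitext template, e.g. {{opinion|product=Foo|rating=2|campaign=Bar}}.
import re

OPINION = re.compile(
    r"\{\{opinion\|product=([^|}]+)\|rating=(-?\d+)\|campaign=([^|}]+)\}\}"
)


def extract_opinions(wikitext: str) -> list:
    """Collect every opinion template that follows the strict syntax exactly;
    malformed entries are ignored rather than guessed at."""
    return [
        {"product": p.strip(), "rating": int(r), "campaign": c.strip()}
        for p, r, c in OPINION.findall(wikitext)
    ]


page = "Buy fair! {{opinion|product=Fairtrade Coffee|rating=2|campaign=Clean Clothes}}"
print(extract_opinions(page))  # [{'product': 'Fairtrade Coffee', 'rating': 2, ...}]
```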
Recently the Consumerium house trolls have been on the warpath with Wikipedia, which was long ago designated as a reference point for keeping facts as facts and not allowing every rumor to persist as a fact. If Wikimedia develops that discipline, that would be good, but it doesn't have it now. The GFDL text corpus as a whole still usually has one good article on any key issue. Trying to duplicate encyclopedia and dictionary functions would lead to information inflation, wherein Consumerium loses whatever trust it may sometime have had.

The Simple English Wikipedia was a potential ally, then maybe not, and it's hard to tell where that project is going, since it is not using a vocabulary rich enough to discuss the kind of moral decision issues covered in the glossary. At least 2000 words of English are required for this, and "SEW" is now using only 1000, which is not even good enough for translation.
Also, the implementation of the Python programming language on smartphones is directing our attention towards Alternate wiki-implementations, mainly MoinMoin, which is modular in its design and written in Python. It may also be useful to settle on one mobile device vendor, at least for the pilot project.