
Metaweb

From Consumerium development wiki R&D Wiki
Revision as of 06:11, 12 March 2004 by 142.177.93.81 (talk) (updating)

Metaweb is visible at http://www.metaweb.com

It is an experimental wiki that may become a large public wiki. Like Consumerium, it uses the MediaWiki software in its R&D phase, and also like Consumerium it intends to create tools that work with material in some standard prototype wikitext format with well-developed multi-language conventions, most likely based on the one now used in Wikipedia.

It is very clear that they intend NOT to use MediaWiki or Perl exclusively, or maybe at all, in the long term: they have defined an intermediate page format explicitly designed for their own custom software to ingest and turn into a semantic web. This is something the Signal Wiki (itself a semantic web, segmented maybe by faction) must be able to do to the Research Wiki as a pre-step to affecting the Consumerium buying signal. So their ultimate solution might also be ours, if we can co-operate with them early enough and build on the same tools base.

A first step to this might be to simply adopt a close enough variation of their intermediate page format (with different section titles probably, we don't want a "Stephensonia" section) that we can use their tools here with few adaptations. See Consumerium:intermediate page format for the abstract, and Consumerium:intermediate page for lists of types of such pages here.
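To make the idea of adopting a sectioned intermediate page format concrete, here is a minimal sketch of splitting such a page into named sections. The actual intermediate page format is not specified in this article, so the section titles and the use of plain MediaWiki `== Title ==` headings below are illustrative assumptions only:

```python
import re

# Hypothetical sectioned page using MediaWiki heading syntax; the real
# intermediate page format is not defined here, so these section titles
# are illustrative only (and deliberately not "Stephensonia").
PAGE = """== Summary ==
A short neutral summary of the topic.

== Disputed ==
Claims that factions disagree on.

== Links ==
Pointers to related pages.
"""

def split_sections(wikitext):
    """Split a page into {section title: body} on '== Title ==' headings."""
    sections = {}
    current = None
    for line in wikitext.splitlines():
        m = re.match(r"^==\s*(.+?)\s*==$", line)
        if m:
            current = m.group(1)
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {title: "\n".join(body).strip() for title, body in sections.items()}

print(sorted(split_sections(PAGE)))
```

Tools on either wiki could then operate on sections by name, which is what makes a "close enough variation" of the format workable: only the agreed section titles need to match.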

They also seem to be taking the lead in thinking about how raw wikitext and ConsuML will be translated into an XML-like semantic web. An XML dump might not be page by page and strictly marked up for style; it might instead cover whole topic areas at once, or, in our application, all factionally defined terms unique to one faction. Whether the semantic web could be re-integrated after multiple parties have edited the different factional sections independently is still entirely up for grabs. Again, we should follow their lead, as this is what they plan to do in general.
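As a rough illustration of the wikitext-to-XML step, the sketch below turns a dictionary of parsed page sections into a simple XML document. The element and attribute names (`page`, `section`, `title`, `name`) are invented for illustration; nothing here is taken from any actual Metaweb or ConsuML specification:

```python
import xml.etree.ElementTree as ET

# Hypothetical serialization of parsed page sections into XML; the
# element and attribute names are assumptions, not a real schema.
def sections_to_xml(page_title, sections):
    page = ET.Element("page", title=page_title)
    for title, body in sections.items():
        sec = ET.SubElement(page, "section", name=title)
        sec.text = body
    return ET.tostring(page, encoding="unicode")

xml = sections_to_xml("Example", {"Summary": "A short summary.",
                                  "Disputed": "Contested claims."})
print(xml)
```

A dump built this way could just as easily group many pages, or all of one faction's sections, under a single root element, which is the "whole topic areas at once" shape mentioned above.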

It is not clear how they plan to deal with licensing, but they are presently under the GFDL and seem quite aware that MediaWiki cannot really implement this license, nor support advanced GFDL features like Invariant Sections, which would be required for any kind of certification or validation of article versions. GetWiki may be better in this respect, but that is not clear either.

The project includes some major brains like Danny Hillis and Neal Stephenson, and seems troll friendly enough at the moment to make it possible to at least introduce the correct bridging ideas into both projects.