Priorities selections

This page is about figuring out what needs to work at what point in time, and how the graph of dependencies between these items goes.


== Prioritized / Do first items ==
* [[w:Wikipedia:Unified login|Unified login]] ([[Wikimedia]] flavour). The master database should probably be the Consumerium Commons.
* [[Semantic MediaWiki]] for the implementation stage wiki. First in English; as we go along and figure it out we can probably make other languages capable of semantic input too, so we don't end up with a horrible [[APOV]] (Anglophone Point Of View). I think the trollz called it [[EPOV]], though it should really be ESPOV (English-Speaking Point Of View).
* [[Consumerium Commons]] central uploadery (Wikimedia flavour) and mala fide checking [[controll]] for media (Consumerium style).
* RDF data ontology acquisition and mirroring of the various data sources into local databases quite rapidly, for development, efficiency and [[ecology]] reasons (a rough sketch of such a mirroring step follows below this list).
* Install a [[copyleft]] [[Databases#Graph databases|graph database]] and a [[Databases#Subject-predicate-object databases|subject-predicate-object database (aka. triplestore)]]. Data can be moved quite freely between these. Install the same or other databases on non-production machines for database dump verification and benchmarking purposes.
* Open the first public [[SPARQL]] endpoint onto the public database (or possibly a curated database).
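To make the RDF mirroring and SPARQL items above a little more concrete, here is a minimal sketch using the Python rdflib library. The source URL, file names and query are placeholder assumptions for illustration only; the real pipeline would pull from the actual data sources and serve the curated data through a proper triplestore endpoint rather than an in-memory graph.

<syntaxhighlight lang="python">
# Minimal sketch: mirror an external RDF source into a local file and run a
# SPARQL query over it. Uses rdflib; the URL and the query are placeholders.
from rdflib import Graph

SOURCE_URL = "https://example.org/data/products.ttl"  # hypothetical RDF dump

graph = Graph()
graph.parse(SOURCE_URL, format="turtle")  # acquire the remote data/ontology

# Keep a local mirror so later work does not have to re-fetch the source.
graph.serialize(destination="local-mirror.nt", format="nt")

# A SPARQL endpoint would answer queries like this one against the mirror;
# real queries would use the Consumerium vocabularies once they are defined.
results = graph.query(
    """
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
    """
)
for row in results:
    print(row.s, row.p, row.o)
</syntaxhighlight>

Moving the mirrored triples between a graph database and a triplestore is then mostly a matter of re-serializing them in whatever format the target database imports.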


== Important in future ==
* New [[language]]s introduced as there is interest. [[Language vs. Area]] resolved gradually by adapting best and good practices as we go about this.
* Establish a security framework for storing secret data. Obviously some data will need to be protected against unwanted access; sometimes that includes the [[Lowest Troll]] and other people who may be assigned technical privileges. A rough illustration of one piece of this follows below this list.
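As one illustration of what "protected against unwanted access" could mean in practice, the sketch below encrypts a secret at rest with a symmetric key, so that even someone with read access to the stored data cannot use it without the key. The library and the key handling here are assumptions, not decisions about the eventual security framework.

<syntaxhighlight lang="python">
# Sketch: encrypt secret data at rest with a symmetric key, using the
# "cryptography" package. Key management (who holds the key and where it
# lives) is the hard part and is deliberately left out of this illustration.
from cryptography.fernet import Fernet

# In a real deployment the key would come from a separate secrets store,
# never be generated or kept next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

secret = b"example secret payload"   # placeholder data
token = fernet.encrypt(secret)       # this ciphertext is what gets stored

# Only holders of the key can recover the plaintext.
assert fernet.decrypt(token) == secret
</syntaxhighlight>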
     
    == Useful in the future ==
    * Automate backups intelligently.
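As a rough idea of where "intelligently" could start, the sketch below makes a dated database dump and prunes dumps older than a retention window. The dump command, paths and retention period are placeholder assumptions; a real scheme would also verify the dumps and copy them off the machine.

<syntaxhighlight lang="python">
# Sketch: dated backup with simple retention. mysqldump, the paths and the
# retention window are placeholders, not a decision about the backup scheme.
import datetime
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/var/backups/consumerium")  # hypothetical location
KEEP_DAYS = 14                                          # hypothetical retention

def make_backup() -> pathlib.Path:
    """Dump the wiki database into a dated file and return its path."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    target = BACKUP_DIR / f"wikidb-{stamp}.sql"
    with target.open("wb") as out:
        # Placeholder dump command; adjust to whatever database is in use.
        subprocess.run(["mysqldump", "wikidb"], stdout=out, check=True)
    return target

def prune_old_backups() -> None:
    """Delete dumps whose modification time is past the retention window."""
    cutoff = datetime.datetime.now() - datetime.timedelta(days=KEEP_DAYS)
    for dump_file in BACKUP_DIR.glob("wikidb-*.sql"):
        mtime = datetime.datetime.fromtimestamp(dump_file.stat().st_mtime)
        if mtime < cutoff:
            dump_file.unlink()

if __name__ == "__main__":
    prune_old_backups()
    make_backup()
</syntaxhighlight>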
     
== Do very carefully, once we are sure we got it right ==
    * [[Voting]] (totally)


== Low priority in the future ==
