Priorities selections

    From Consumerium development wiki R&D Wiki
This is about figuring out what needs to work at what point in time, and how the graph of these items develops.


== Prioritized / Do first items ==
* [[w:Wikipedia:Unified login]] ([[Wikimedia]] flavour). The master database should probably be the Consumerium Commons.
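The unified-login idea above can be sketched as a lookup against a single master account table. This is a minimal illustration, not the actual Wikimedia CentralAuth schema; all names and fields below are hypothetical:

```python
# Toy model of unified login: every wiki in the family defers to one
# master account database (a plain dict standing in for the Consumerium
# Commons user table). Structure and names are illustrative only.

MASTER_ACCOUNTS = {
    "ExampleUser": {"id": 1, "attached_wikis": {"en", "fi"}},
}

def resolve_unified_account(username: str, local_wiki: str):
    """Return the master account if the user exists centrally and is
    attached to the given wiki, else None."""
    account = MASTER_ACCOUNTS.get(username)
    if account is None:
        return None
    if local_wiki not in account["attached_wikis"]:
        return None
    return account
```

The point of the sketch is that each local wiki holds no account records of its own: authentication always resolves through the one central table.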


* [[Semantic MediaWiki]] for the implementation stage wiki. First in English; as we go along and figure it out, we can probably make other languages capable of semantic input too, so we don't end up with a horrible [[APOV]] (Anglophone Point Of View). The trolls called it [[EPOV]], but it should really be ESPOV (English-Speaking Point Of View).
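Semantic MediaWiki records facts inline in wikitext as <code>[[property::value]]</code> annotations. A minimal sketch of pulling such annotations out of page text (the regex is simplified and the example page is made up; real SMW syntax has more cases):

```python
import re

# Matches simplified Semantic MediaWiki inline annotations of the form
# [[Property name::value]]. Ordinary links like [[Some page]] do not match.
ANNOTATION = re.compile(r"\[\[([^:\]|]+)::([^\]|]+)\]\]")

def extract_annotations(wikitext: str):
    """Return (property, value) pairs found in the page text."""
    return ANNOTATION.findall(wikitext)

page = "ACME Cola is made by [[Has manufacturer::ACME Corp]] in [[Made in::Finland]]."
```

This is only the extraction side; SMW itself also stores the pairs in a queryable backend, which is what makes the semantic input useful.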
* [[Consumerium Commons]] central uploader (Wikimedia flavour) and mala fide checking [[controll|control]] for media (Consumerium style).


* RDF data ontology acquisition and mirroring of the various data sources into local databases quite rapidly, for development, efficiency and [[ecology]] reasons.
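The mirroring step can be sketched as parsing an N-Triples dump from a remote source into a locally indexed store. This is a stdlib-only toy (the triples are invented, the line regex is simplified; a real setup would fetch dumps over HTTP and use a proper RDF library):

```python
import re
from collections import defaultdict

# Very simplified N-Triples line: <subject> <predicate> <object or literal> .
NT_LINE = re.compile(r'^(<[^>]+>)\s+(<[^>]+>)\s+(.+?)\s*\.\s*$')

def mirror_ntriples(dump_text: str):
    """Index triples locally as subject -> predicate -> [objects], so that
    later queries hit the local mirror instead of the remote source."""
    store = defaultdict(lambda: defaultdict(list))
    for line in dump_text.splitlines():
        m = NT_LINE.match(line.strip())
        if m:
            s, p, o = m.groups()
            store[s][p].append(o)
    return store

dump = """\
<http://example.org/acme> <http://example.org/makes> <http://example.org/cola> .
<http://example.org/acme> <http://example.org/name> "ACME Corp" .
"""
```

Mirroring once and querying locally is what gives the development-speed and ecology benefit the bullet mentions: the remote source is hit only when the mirror is refreshed.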


* Install a [[copyleft]] [[Databases#Graph databases|graph database]] and a [[Databases#Subject-predicate-object databases|subject-predicate-object database (aka. triplestore)]]. Data can be quite freely moved between these. Install other or same on non-production machines for database dump verification and benchmarking purposes.
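The claim that data moves quite freely between a triplestore and a graph database can be illustrated by round-tripping the same facts between a triple list and a property-graph-style adjacency view. A toy model, not tied to any particular database product:

```python
# Round-trip the same data between a triplestore-style list of
# (subject, predicate, object) triples and a graph-style adjacency dict
# mapping each node to its outgoing (edge-label, target) pairs.

def triples_to_graph(triples):
    graph = {}
    for s, p, o in triples:
        graph.setdefault(s, []).append((p, o))
    return graph

def graph_to_triples(graph):
    return sorted((s, p, o) for s, edges in graph.items() for p, o in edges)

facts = [("acme", "makes", "cola"), ("acme", "based_in", "Finland")]
```

Because both shapes carry exactly the same information, a dump from one store can be verified against the other on the non-production machines the bullet mentions.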


* Open the first public [[SPARQL]] endpoint onto the public database (or possibly a curated database)


== Important in future ==
* New [[language]]s introduced as there is interest. [[Language vs. Area]] resolved gradually by adopting best and good practices as we go about this.
* Establish security framework for storing secret data. Obviously some data will need to be protected against unwanted access. Sometimes that includes the [[Lowest Troll]] and other people who may be assigned technical privileges.
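The point that technical privileges must not automatically grant access to secret data can be sketched as a policy check where data-access grants are tracked separately from technical roles. All names below are hypothetical:

```python
# Toy access-control check: holding a technical role (server admin, or
# even the Lowest Troll) does not by itself grant access to secrets;
# only an explicit per-user grant counts.

TECHNICAL_ROLES = {"sysadmin", "lowest_troll"}

def can_read_secret(user: str, user_roles: set, secret_grants: set) -> bool:
    """Allow access only via an explicit grant for this user.
    Technical roles are deliberately ignored in this decision."""
    return user in secret_grants

grants = {"auditor_alice"}
```

A real framework would back this with encryption so the policy holds even against someone with raw database access, but the separation of the two privilege kinds is the core idea.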


== Low priority in the future ==

Revision as of 15:37, 1 September 2016

== See also ==