Talk:Export-import

    Latest revision as of 13:39, 9 September 2004

    What chance is there that MediaWiki will *ever* be useful for this purpose? Its developers are only interested in Wikipedia, and in entrenching their sysop power structure by adding a permission-based model no one really needs, but which makes them feel powerful.

    They haven't even cloned the GetWiki facility yet, even though it is the ideal way to keep those who fork off the GFDL corpus as close to the core corpus as possible. That is of course because they are trying to stop all other GFDL corpus access providers and retain trademark power over the name "wikipedia", which is actually generic. There are many wikipedias, and the contributor is not seeking to give their work to any specific bunch of "Wikimedia" thugs; they are seeking to give it to all wikipedias.

    We should be more concerned with edits, votes and bets, and with how answer recommendation might move things from Research Wiki to Publish Wiki. Importing a lot of sysop-approved biased nonsense from Wikipedia should be low on our list of priorities. Why not just use GetWiki and get it from Wikinfo instead?

    GetWiki discourages the construction of a proper fork by allowing users to fetch articles from Wikipedia on demand, whenever they access a page which doesn't exist. This means that a large proportion of the content hosted by a GetWiki site is actually controlled by Wikipedians. What's more, Wikimedia would be within its rights to cease service to any GetWiki site, leaving them out in the cold with a useless leech script. Why not just download the database (http://download.wikimedia.org/) and end your dependence on Wikimedia? -- Tim Starling 11:50, 23 Jun 2004 (EEST)
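
    As a concrete illustration of the on-demand pattern described above, the sketch below fetches a missing page live from Wikipedia at request time. Special:Export is a standard MediaWiki endpoint, but the page store and helper names are assumptions for illustration, not GetWiki's actual code; the point is that every local miss becomes a live hit on Wikimedia's servers, which is the dependence being objected to.

        # Rough sketch (not GetWiki's actual code) of fetch-on-miss "leeching":
        # a page that does not exist locally is pulled from Wikipedia on demand.
        import urllib.parse
        import urllib.request

        WIKIPEDIA_EXPORT = "https://en.wikipedia.org/wiki/Special:Export/"  # assumed source

        local_pages = {}  # stand-in for the local wiki's own page store

        def get_page(title: str) -> str:
            if title in local_pages:
                return local_pages[title]          # already held locally: no external request
            # Missing page: fetch the current revision from Wikipedia right now.
            url = WIKIPEDIA_EXPORT + urllib.parse.quote(title)
            with urllib.request.urlopen(url) as resp:
                xml = resp.read().decode("utf-8")  # MediaWiki export XML, wikitext inside <text>
            local_pages[title] = xml               # naive: keeps the raw export XML
            return xml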

    This is bullshit, but it does prove Wikimedia is a menace to the GFDL corpus. Wikipedia is not "within its rights to cease service" under some reasonable interpretations of the GFDL. Since very few trolls are blocked in both places, the availability of current articles both ways is one way Wikimedia avoids being called on its frequent GFDL violations. It is easy enough to suck the appropriate articles in through various read-only proxies that the vigilante usurpers don't know about, and never will know about. They can't track all the tools trolls use.
    As for "control", so what? The point is that GFDL corpus access providers can cooperate, so that anyone else could feed Wikinfo if Wikimedia cut it off fascistically. That would put the new feeder in a position of power, as it could serve any other mirror web site that Wikimedia corruption deemed a threat to its monopoly.
    Wikipedia unrighteously uses a mass of GFDL corpus content that was donated "to the GFDL itself", not "to Wikipedia" - no ownership rights were ever ceded to Wikimedia in particular, and even new contributions are not so deeded. So the rights of those contributors and of those whom you call "wikipedians" are not the same thing, and attempts to make them the same thing are easy enough to slap down legally. We're watching all your mistakes.

    We care very deeply about preserving the right to fork and the right to freely redistribute Wikipedia content. However, our hardware resources are limited. We couldn't possibly serve someone trying to request hundreds of pages per second, although we'd be happy for them to obtain our content in a more orderly fashion. Similarly, we would prefer it if mirrors and forks would cache content locally rather than fetching it from Wikipedia on every client request. It is not against the GFDL to require that they do so. -- Tim Starling 13:28, 24 Jun 2004 (EEST)

    This is exactly why we need to focus our efforts on creating working Export-import functionality, so that we can serve reasonably fresh content from many wikis without driving people to other wikis whenever they could stay within Publish Wiki. --Juxo 14:03, 24 Jun 2004 (EEST)
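
    A minimal sketch of what such Export-import plumbing might look like, assuming the source wiki exposes MediaWiki's Special:Export XML and that Publish Wiki has some local import hook; the store_revision() function and the example page title are hypothetical placeholders, not an existing Consumerium or MediaWiki API.

        # Hypothetical export-import sketch: pull one page from a source wiki's
        # Special:Export feed and hand the parsed revision to a local import hook.
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        SOURCE_EXPORT = "https://en.wikipedia.org/wiki/Special:Export/"  # assumed source wiki

        def store_revision(title: str, timestamp: str, text: str) -> None:
            """Placeholder for writing the revision into the local (Publish) wiki."""
            print(f"imported {title!r} as of {timestamp} ({len(text)} bytes of wikitext)")

        def local_name(elem: ET.Element) -> str:
            # Export XML is namespaced; compare on the local part of each tag name.
            return elem.tag.rsplit("}", 1)[-1]

        def import_page(title: str) -> None:
            url = SOURCE_EXPORT + urllib.parse.quote(title)
            with urllib.request.urlopen(url) as resp:
                root = ET.fromstring(resp.read())
            for elem in root.iter():
                if local_name(elem) != "page":
                    continue
                fields = {local_name(child): child for child in elem.iter()}
                store_revision(fields["title"].text,
                               fields["timestamp"].text,
                               fields["text"].text or "")

        import_page("GNU Free Documentation License")

    Run on a schedule, or triggered by something like answer recommendation, the same loop could keep a whole list of titles reasonably fresh without any live leeching at page-view time.
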
    "We couldn't possibly serve someone trying to request hundreds of pages per second," but this just doesn't happen nor could it ever. A page requested via the so-called "leech facility" (though the correct term would be corpus import probably) would end up called up once and only referenced again if it was reviewed again - this can be cached without any great difficulty using off the shelf software.
    Also, set up an independent board and lots of cash will flow in - none of which will come anywhere near as long as Bomis people run everything and practice Wikimedia corruption.
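
    To make the caching claim above concrete: a small time-based cache, built from nothing but standard-library pieces, is enough to guarantee a page is fetched from the source wiki at most once per review window, however many readers request it in between. The one-day window and the fetch_export() helper are illustrative assumptions, not a measured or agreed policy.

        # Sketch of off-the-shelf caching for imported pages: fetch once, then
        # serve the stored copy until the page is next reviewed after the window.
        import time
        import urllib.parse
        import urllib.request

        REVIEW_WINDOW = 24 * 60 * 60   # seconds; assumed freshness policy
        _cache = {}                    # title -> (fetched_at, export_xml)

        def fetch_export(title: str) -> str:
            url = "https://en.wikipedia.org/wiki/Special:Export/" + urllib.parse.quote(title)
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode("utf-8")

        def get_cached(title: str) -> str:
            now = time.time()
            if title in _cache:
                fetched_at, xml = _cache[title]
                if now - fetched_at < REVIEW_WINDOW:
                    return xml               # served locally: no load on the source wiki
            xml = fetch_export(title)        # first request, or a stale copy being re-reviewed
            _cache[title] = (now, xml)
            return xml

    A dedicated HTTP cache such as Squid sitting in front of the importer would do much the same job with no custom code at all, which is presumably what "off-the-shelf software" means here.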