Talk:Export-import

They haven't even cloned the [[GetWiki]] facility yet, and it's the ideal way to keep those who [[fork off]] the [[GFDL corpus]] as close to the core corpus as possible.  That is of course because they are trying to stop all other [[GFDL corpus access provider]]s, and retain [[trademark]] power over the name "wikipedia", which is actually generic.  There are many wikipedias, and the contributor is not seeking to enable or give their work to any specific bunch of "[[Wikimedia]]" thugs; they're seeking to give it to all wikipedias.
We should be more concerned with [[edits, votes and bets]] and how [[answer recommendation]] might move things from [[Research Wiki]] to [[Publish Wiki]].  Importing a lot of sysop-approved biased nonsense from [[Wikipedia]] should be low on our list of priorities.  Why not just use [[GetWiki]] and get it from [[Wikinfo]] instead?
-----
GetWiki discourages the construction of a proper fork by allowing users to fetch articles from Wikipedia on demand, whenever they access a page which doesn't exist. This means that a large proportion of the content hosted by a GetWiki site is actually controlled by Wikipedians. What's more, Wikimedia would be within its rights to cease service to any GetWiki site, leaving them out in the cold with a useless leech script. Why not just [http://download.wikimedia.org/ download the database] and end your dependence on Wikimedia? -- [[User:Tim Starling|Tim Starling]] 11:50, 23 Jun 2004 (EEST)
:This is bullshit but it does prove [[Wikimedia]] is a menace to the [[GFDL corpus]].  Wikipedia is not "within its rights to cease service" under some reasonable interpretations of the [[GFDL]].  Since very few [[trolls]] are blocked in both places, the availability of current articles both ways is one way [[Wikimedia]] avoids being called on its frequent [[GFDL violation]]s.  It is easy enough to suck the appropriate articles in through various read-only proxies that the [[developer vigilantism|vigilante]] [[usurper]]s don't know about, and never will know about.  They can't track all the tools trolls use.
:As for "control", so what?  The point is that [[GFDL corpus access provider]]s can cooperate, so that anyone else could feed [[Wikinfo]] if [[Wikimedia]] cut it off fascistically.  That would put the new feeder in power position, as it could serve any other [[mirror web site]] that [[Wikimedia corruption]] deemed a threat to its monopoly.
:[[Wikipedia]] unrighteously uses a mass of [[GFDL corpus]] content that was donated "to the GFDL itself" not "to Wikipedia" - no ownership rights were ever ceded to [[Wikimedia]] in particular, and even new contributions are not so deeded.  So the rights of those contributors and those who you call "wikipedians" are not the same thing, and attempts to make them the same thing are easy enough to slap down legally.  We're watching all your mistakes.
::We care very deeply about preserving the right to fork and the right to freely redistribute Wikipedia content. However our hardware resources are limited. We couldn't possibly serve someone trying to request hundreds of pages per second, although we'd be happy for them to obtain our content in a more orderly fashion. Similarly, we would prefer it if mirrors and forks would cache content locally rather than fetching it from Wikipedia on every client request. It is not against the GFDL to require that they do so. -- [[User:Tim Starling|Tim Starling]] 13:28, 24 Jun 2004 (EEST)
:::This is exactly why we need to focus our efforts on creating a working [[Export-import]] functionality so that we can serve reasonably fresh content from many wikis without driving people to other wikis whenever they could stay within [[Publish Wiki]]. --[[User:Juxo|Juxo]] 14:03, 24 Jun 2004 (EEST)
::::"We couldn't possibly serve someone trying to request hundreds of pages per second," but this just doesn't happen, nor could it ever.  A page requested via the so-called "leech facility" (though the correct term would probably be [[corpus import]]) would be fetched once and only requested again if it was reviewed again - this can be cached without any great difficulty using off-the-shelf software.
::::Also, set up an [[independent board]] and lots of cash will flow in - none of it will come as long as Bomis people run everything and practice [[Wikimedia corruption]].
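The caching arrangement argued over above - an imported page is fetched from the remote wiki once, stored locally, and served from the local copy on every later request - can be sketched roughly as follows. This is a minimal illustration, not any real GetWiki or MediaWiki API; the `fetch` callable and the staleness window are assumptions for the sake of the example.

```python
import time

class PageCache:
    """Serve remote wiki pages from a local copy, fetching each at most
    once per staleness window (a sketch of the 'cache locally rather than
    fetch on every client request' idea)."""

    def __init__(self, fetch, max_age=3600):
        self.fetch = fetch      # hypothetical callable: title -> wikitext
        self.max_age = max_age  # seconds before a cached page is stale
        self.store = {}         # title -> (fetched_at, wikitext)

    def get(self, title):
        entry = self.store.get(title)
        if entry and time.time() - entry[0] < self.max_age:
            return entry[1]     # serve the local copy, no remote request
        text = self.fetch(title)  # one remote request per missing/stale page
        self.store[title] = (time.time(), text)
        return text
```

With something like this in front of the importer, even hundreds of client requests per second for the same article translate into at most one upstream fetch per staleness window, which is the point being made about off-the-shelf caching.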