Consumerium development wiki - User contributions [en]
Feed: https://develop.consumerium.org/w/api.php?action=feedcontributions&user=Tim+Starling&feedformat=atom
Generated 2024-03-29T05:31:46Z by MediaWiki 1.39.6

----
User:Jukeboksi/Blog/September2004 - 2004-09-08T05:53:34Z - Tim Starling: answer the question
https://develop.consumerium.org/w/index.php?title=User:Jukeboksi/Blog/September2004&diff=5106
7.9.2004

Today I've been mostly organizing my study notes and editing [[wikipedia]], mainly adding categories to business and economics articles.
----
6.9.2004

Last weekend I got my move to [[w:Tampere|Tampere]] mostly finalized, though most of my stuff is still around the room in boxes and bags, and I have to sort it all out and find a place for everything. I should be getting [[w:ADSL|ADSL]] access in one to two weeks' time, which will help me contribute more.

Yesterday I had a meetup with [[User:Linkola|Linkola]], who is working on his doctorate at http://www.uiah.fi . I'm halfway through reading his master's thesis and will write a brief summary of it here. It contains lots of interesting research about [[consumer]] wishes, hopes and fears, and practical information about '''how''' consumers use the information about [[product]]s currently supplied to them.

:It's very easy to study what they look at. It's very hard to study how it affects them. Focus on [[price premium]] perhaps as the indicator that can be made objective? That is, someone buys the [[green light]] product even though it costs 10 Eurocents more than the [[red light]] one - but if it costs 15 more, they either buy nothing or actually buy the red light product. That's the kind of data you need to determine people's actual willingness to pay more to satisfy [[individual buying criteria]].
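
A minimal sketch of that threshold model (the struct, names and numbers are illustrative assumptions, not anything from Consumerium's design):

<pre>
#include <cstdio>

// Each consumer tolerates some maximum premium for the green light product;
// past that threshold they fall back to the red light product or buy nothing.
struct Consumer {
    double maxPremiumEur;  // willingness to pay, e.g. 0.10 = 10 Eurocents
};

const char* choice(const Consumer& c, double premiumEur) {
    return premiumEur <= c.maxPremiumEur ? "green" : "red or nothing";
}

int main() {
    Consumer c = { 0.10 };
    printf("%s\n", choice(c, 0.10));  // green
    printf("%s\n", choice(c, 0.15));  // red or nothing
    return 0;
}
</pre>
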
He renewed his commitment to become one of the founding members of the Consumerium Association of Finland. He was most interested in getting a [[pilot project]] hastily off the ground and getting to analyse '''how and what information consumers use for making decisions''', so I'm betting that he would be very interested in [[link transit]] data, which has been a hot potato around here for quite a while now.

:Yes, clearly it's of even more use to [[Consumerium Services]] or other serious [[wiki mission]]s than to those [[Wikimedia|bogus pseudo-encyclopedists]] who don't even understand why it's important, or pretend not to (more likely, unless they are stupid).

::If it's so obvious why link transit data is valuable, then why haven't any of the users requested it? Just answer my question on [[Talk:Link transit]]. If you're so arrogant that you refuse to answer my question on the grounds that I am "too stupid", why should you expect me to do this work for you? -- [[User:Tim Starling|Tim Starling]] 08:49, 8 Sep 2004 (EEST)

He also expressed his view that we should start with a limited group of users, but I countered that with the fact that we have long been planning for a system that is accessible to everyone, without any limitation on who may use it.

:It is obviously better to have a wide-open editorial policy that is [[troll-friendly]] and immune to [[propaganda]] or takeover by any one [[faction]] - else who would trust the data? This is just typical academic belief on his part, that somehow cliques can be made trustworthy. It's fairly obvious that without the wide-open policy, none of the design work would have gotten done.

I understand that having one single group (i.e. the members of some association with an interest in the things we will be dealing with) would make for better research material for his doctorate if he decides to include Consumerium in his post-graduate studies in some way. The discussion we had was intense, and I found it very pleasing to actually get to talk about these things face-to-face and not via [[wiki]].

:It would be useful to support a doctoral study on [[moral purchasing]], but we really need a [[Research Wiki]] to actually start to compile [[intermediate page]]s on all the things we care about. We are long past due to do that, and other projects are passing us. We can't rely on [[CorpKnowPedia]], [[Consumerpedia]], [[Wikipedia]] and [[Disinfopedia]] to track these things, though we might from time to time rely on information from all of them. [[Wikinfo]] might be useful to track the [[sympathetic point of view]] of various movements like [[no old growth]] or [[dolphin free]], but not to track corps, since the [[Coca-Cola]] article there must be sympathetic to Coca-Cola! So we have a niche to fill that has not been filled. Perhaps work with [[Indymedia]] on this, as they expose corporate misbehaviour a lot?

----
2.9.2004

[http://test.wikipedia.org/w/index.php?title=Main_Page&action=validate&timestamp=20040810005530 Here's] an interesting piece of code developed by Magnus Manske a while ago. It's not used on [[Wikipedia]] but could be very useful for the [[Consumerium Process]]. It allows users to flag an article as validated on a number of criteria, i.e. ''style, legal, completeness, facts, suitability for "final" release ([[Publish Wiki]] in our case)''.
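
One way to picture those validation flags - a sketch under topic names taken from the list above; Magnus's actual schema may well differ:

<pre>
// One bit per validation topic; an article revision carries a set of these.
enum ValidationTopic {
    VAL_STYLE        = 1 << 0,
    VAL_LEGAL        = 1 << 1,
    VAL_COMPLETENESS = 1 << 2,
    VAL_FACTS        = 1 << 3,
    VAL_RELEASE      = 1 << 4   // suitable for "final" release / Publish Wiki
};

// A revision would be ready for the Publish Wiki once every topic is flagged.
bool readyToPublish(unsigned flags) {
    const unsigned all = VAL_STYLE | VAL_LEGAL | VAL_COMPLETENESS |
                         VAL_FACTS | VAL_RELEASE;
    return (flags & all) == all;
}
</pre>
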
Thanks to [[User:TimStarling]] for pointing out this code. I asked him about Magnus's code supporting custom meta tags such as "noindex" (or whatever it's called), because if we could easily and reliably control what gets indexed by search engines and what does not, we could make do with a single unified [[Research Wiki]] and [[Publish Wiki]], where articles flagged indexable would be considered "published" and those flagged "noindex" would be considered still in the research stage. Just a thought. Apparently Magnus's code does not include tags for robots, but according to Tim this would not be difficult to implement. The main problem is that once an article has been indexed, and some seedy characters then add questionable content, how does one get Google etc. to stop indexing it? Apparently there is as yet no way to remove pages from search engines on request.
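
A sketch of the unified-wiki idea. The tag values follow the common robots meta convention; nothing here is from Magnus's actual code:

<pre>
#include <string>

// The skin would emit this tag in the page head: published articles may be
// indexed, research-stage articles carry "noindex".
std::string robotsMetaTag(bool published) {
    return published
        ? "<meta name=\"robots\" content=\"index,follow\">"
        : "<meta name=\"robots\" content=\"noindex,nofollow\">";
}
</pre>
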
[http://www.textually.org/picturephoning/archives/002729.htm Ericsson and ScanBuy] are working on adding [[barcode]] capture capabilities to [[Ericsson]] [[Hardware|camera phones]] with [http://www.scanzoom.com/ ScanZoom] technology.
----
1.9.2004

I'm currently reading the master's thesis of Jouni Linkola, "Shopping Guide to The Future", which is available (in Finnish) at http://mlab.uiah.fi/5medialaunch/jlinkola_lopputyo.pdf . There is also a visualization of some of its main aspects at http://personal.inet.fi/surf/graphic/future.html (again in Finnish). The visualization is quite similar to the original [[Motivation]] of [[Consumerium:Itself]]. I'm looking forward to meeting up with Jouni to discuss the synergies with his post-graduate studies, which apparently also focus on information services that make consumers more informed and empowered.

Also check out the cool [[Consumeter]] shopping bag at http://www.cc.jyu.fi/~antello/designfiction/pictures.htm . Watch the associated videos as well if you have broadband.

:[[design fiction]] is a [[Good Thing]]; that's what [[visions]] and [[best cases]] are ultimately about, and [[free circulation of fiction]] is better still.

:Can [[Consumerium:We]] get Linkola to contribute to [[visions]] and [[best cases]]? He might also have insight into [[worst cases]], but more likely we have thought that through more. We need one unified [[design fiction]] effort to figure out what our priorities are and where we're going technically in the long run.

----
Avoid building metaphor - 2004-09-07T07:51:13Z - Tim Starling: moved to "Avoid_the_building_metaphor"
https://develop.consumerium.org/w/index.php?title=Avoid_building_metaphor&diff=16032
#REDIRECT [[Avoid_the_building_metaphor]]

----
Talk:Link transit - 2004-09-06T03:30:17Z - Tim Starling: pretty graph pictures?
https://develop.consumerium.org/w/index.php?title=Talk:Link_transit&diff=5107
I'm not sure if our apache is configured to even log internal clicks. Maybe, maybe not. Anyway, I don't currently have any software to report on or analyze this information. If you know of GPL or otherwise free tools for digging this information out of [[httpd log]]s, please post here --[[User:Juxo|Juxo]] 16:47, 1 Sep 2004 (EEST)

:This tends to be expensive software run by [[ad server]] companies. But it is certainly in use at all [[publicly traded search engine]]s like [[Yahoo]] and [[Google]]; in fact, you can see the "imgurl" they use to track, say, which queries led to which image lookups.

----

I wrote up a basic program to perform this kind of analysis on log files, but I'm not sure why you think it would be useful for either contributors or Bomis. It's certainly not a commonly requested feature. Wouldn't view count data be more useful than link transit data?

:Both are useful for the same reasons. And both are not available. "Server load" is a lousy excuse, when [[Wikimedia]] could raise all the money it needed for hardware with an [[independent board]].

::You say both are useful for the same reasons. What reasons are those? -- [[User:Tim Starling|Tim Starling]] 07:13, 4 Sep 2004 (EEST)

:::A serious [[encyclopedia]] or any [[journal]] would care which pages were reviewed, which were reviewed from which others, how often, and what connections were of interest to readers. It's kind of embarrassing to have to say that out loud.

::::Of course we care about review, but wouldn't that be better served by popularity data than link transit data? What would you do with link transit information? How do you "elaborate" a link? The best use of it I can think of is to pick a small set of related articles and draw pretty graph pictures. A noble goal, to be sure, but it would require a change to the program below to generate such data efficiently. -- [[User:Tim Starling|Tim Starling]] 06:30, 6 Sep 2004 (EEST)

This matters because I need to know what the output format should be, and I need to have some way to justify using server resources to generate such data.

:A map of nodes/pages with the number of [[link transit]]s on each edge, each edge representing a link, is the obvious display. But that would be huge, so one must be able to filter down to a very small number of pages and the links that hold them together, typically the most heavily clustered / most deeply connected to each other. [[Xerox PARC]] did some research on this about twenty years ago.
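
One hedged sketch of that filtering step: read the "from to count" triples the program below emits (assuming they have been split into their own file), drop edges below a threshold, and write a Graphviz DOT graph. The file name and threshold are illustrative assumptions.

<pre>
#include <cstdio>

int main() {
    const int threshold = 50;  // keep only the most heavily used links
    FILE* in = fopen("transits.tsv", "r");
    if (!in) {
        printf("Can't open transits.tsv\n");
        return 1;
    }
    printf("digraph transits {\n");
    int from, to, count;
    while (fscanf(in, "%d%d%d", &from, &to, &count) == 3) {
        // Node names are URL indexes; join against the URL list to label them.
        if (count >= threshold) {
            printf("  n%d -> n%d [label=\"%d\"];\n", from, to, count);
        }
    }
    printf("}\n");
    fclose(in);
    return 0;
}
</pre>
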
Anyway, the following is the result of a couple of hours of procrastination. -- [[User:Tim Starling|Tim Starling]] 11:26, 3 Sep 2004 (EEST)

:It looks like C to me, and it looks like main() takes a standard httpd.log as input. I'll run this on our logs sometime when I have the time. Kinda busy now. --[[User:Juxo|Juxo]] 13:02, 3 Sep 2004 (EEST)

::It's C++. It outputs two sections separated by a double linefeed. The first is an indexed list of URLs. The second has three values on each line: index from, index to and the transit count. The idea is that you would read all this into a relational database with an index on all three columns, then perform whatever analysis you need to perform. -- [[User:Tim Starling|Tim Starling]] 07:13, 4 Sep 2004 (EEST)
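
A sketch of consuming that two-section format, reading both sections into memory; loading them into a relational database as Tim describes is left out:

<pre>
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
    char line[2000];
    std::vector<std::string> urls;               // section 1: index -> URL
    std::map<std::pair<int,int>, int> transits;  // section 2: (from,to) -> count

    // Section 1 ends at the first empty line.
    while (fgets(line, sizeof line, stdin) && line[0] != '\n') {
        int idx;
        char url[1900];
        if (sscanf(line, "%d\t%1899s", &idx, url) == 2) {
            if ((int)urls.size() <= idx) {
                urls.resize(idx + 1);
            }
            urls[idx] = url;
        }
    }
    // Section 2: one "from to count" triple per line.
    int from, to, count;
    while (scanf("%d%d%d", &from, &to, &count) == 3) {
        transits[std::make_pair(from, to)] = count;
    }

    fprintf(stderr, "%d urls, %d transit edges\n",
            (int)urls.size(), (int)transits.size());
    return 0;
}
</pre>
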
<pre>
#include <cstdio>
#include <cstring>
#include <string>
#include <iostream>
#include <vector>
#include <map>
using namespace std;

#define LINE_BUF_SIZE 1000
#define REPORTING_INTERVAL 10000

int getUrlIndex(char* s);

// Orders C strings for use as a map key comparator.
class char_order
{
public:
    bool operator()(char* s1, char* s2) const
    {
        return strcmp(s1, s2) < 0;
    }
};

typedef map<char*, int, char_order> char_map;

typedef char_map::iterator hash_iterator;
typedef vector<map<int, int> >::iterator vectormap_outer_iterator;
typedef map<int, int>::iterator vectormap_inner_iterator;

vector<map<int, int> > outbound;
vector<char*> urls;
char_map urlHash;

int main(int argc, char** argv) {
    FILE* file;
    if (argc == 1) {
        file = stdin;
    } else if (argc == 2) {
        file = fopen(argv[1], "r");
        if (!file) {
            printf("Can't open file %s\n", argv[1]);
            return 1;
        }
    } else {
        printf("Incorrect argument count\n");
        return 1;
    }

    char buffer[LINE_BUF_SIZE];
    int numLines = 0;

    while (!feof(file)) {
        // Print a progress dot every REPORTING_INTERVAL lines
        numLines = (numLines+1)%REPORTING_INTERVAL;
        if (numLines == 0) {
            fprintf(stderr, ".");
            fflush(stderr);
        }
        if (!fgets(buffer, LINE_BUF_SIZE-1, file)) {
            break;
        }

        // Find start of quoted method/URL string
        char* method = strchr(buffer, '"');
        if (!method) {
            continue;
        }
        method++;

        // Find end of method, and start of URL
        char* url = strchr(method, ' ');
        if (!url) {
            continue;
        }
        *url = '\0';
        url++;

        // Find end of URL
        char* referrer = strchr(url, ' ');
        if (!referrer) {
            continue;
        }
        *referrer = '\0';
        referrer++;

        // If URL does not contain "wiki", skip
        if (strstr(url,"/wiki/") == NULL) {
            continue;
        }

        // Find start of referrer
        referrer = strstr(referrer, " \"");
        if (!referrer) {
            continue;
        }
        referrer += 2;

        // Find end of referrer
        char* end = strchr(referrer, '"');
        if (!end) {
            continue;
        }
        *end = '\0';

        // Obtain indexes
        int from = getUrlIndex(referrer);
        int to = getUrlIndex(url);

        // Add to matrix
        if ((int)outbound.size() < from+1) {
            outbound.resize(from+1);
        }
        outbound[from][to]++;
    }

    // Output URLs
    int numUrls = urls.size();
    for (int i=0; i<numUrls; i++) {
        printf("%d\t%s\n", i, urls[i]);
        delete[] urls[i];
    }
    printf("\n");
    int numRows = outbound.size();
    for (int i=0; i<numRows; i++) {
        map<int,int> & row = outbound[i];
        for (vectormap_inner_iterator j=row.begin(); j!=row.end(); j++) {
            printf("%d\t%d\t%d\n", i, j->first, j->second);
        }
    }
    return 0;
}

// Returns a stable integer index for URL s, registering it on first sight.
int getUrlIndex(char* s)
{
    int index;
    hash_iterator iter = urlHash.find(s);
    if (iter != urlHash.end()) {
        index = iter->second;
    } else {
        // Copy string to the heap
        int length = strlen(s)+1;
        char* newMem = new char[length];
        memcpy(newMem, s, length);

        // Add to the containers
        urls.push_back(newMem);
        index = urls.size() - 1;
        urlHash[newMem] = index;
    }
    return index;
}
</pre>

----
User talk:Juxo/Chat Gallery/28.08.2004 with TimStarling - 2004-09-02T00:00:18Z - Tim Starling
https://develop.consumerium.org/w/index.php?title=User_talk:Juxo/Chat_Gallery/28.08.2004_with_TimStarling&diff=4906
Internally it's just implemented as $wgUserTablePrefix, but that's not catchy and alliterative. -- [[User:Tim Starling|Tim Starling]] 18:34, 30 Aug 2004 (EEST)

:It would be nice if [[API]]s always reflected [[GUI]]s, but there are too many morons coding both to make that reliable. Call it the same thing as you do on the user's screen, i.e. [[single login]], using that word/phrase "login" or "log in" consistently, and there will be no confusion about what you mean or what features are being implied.

::You can name something as soon as you get off your fat arse and code something. Until then, quit your whinging. -- [[User:Tim Starling|Tim Starling]] 10:45, 31 Aug 2004 (EEST)

:::Seconds! --[[User:Juxo|Juxo]] 10:46, 31 Aug 2004 (EEST)

::::Coders do not make [[ontology]] choices in any reasonable project. There is no chance that this can lead to anything but disaster, which anyone who has done any [[data warehouse]] work knows from firsthand pain. It's the [[user interface designer]]s that actually make the choices about what things are called, in a reasonable project, and the [[management accounting]] categories, e.g. [[styles of capital]], that determine the deeper categorization systems.

::::Besides, [[trolls]] do not listen to [[developer vigilantiism|developer vigilantes]] of no particular talent, though they will usually answer [[Lowest Troll]]s they respect. The latest bout of hack-backs on essential articles is, however, a sign that [[Wikimedia corruption]] may be spreading to this wiki.

:::::You're not going to listen to me but you expect me to listen to you? Why should I do that? -- [[User:Tim Starling|Tim Starling]] 07:50, 1 Sep 2004 (EEST)

::::::Because all of us [[trolls]] are wiser than all of you [[developer]]s, mate.

:::::::ROFLMAO -- [[User:Tim Starling|Tim Starling]] 03:00, 2 Sep 2004 (EEST)

Ils sont mignons tous les deux ;-) ant (editor's translation: "They're both cute")

----
User talk:Juxo/Chat Gallery/28.08.2004 with TimStarling - 2004-08-30T15:34:41Z - Tim Starling: $wgUserTablePrefix
https://develop.consumerium.org/w/index.php?title=User_talk:Juxo/Chat_Gallery/28.08.2004_with_TimStarling&diff=4827
People who write code that uses the phrase [[log in]] right in the [[user interface]] and then refer to the same feature as "sign on" are just stupid. If they can't think clearly enough to use the same name for the same thing all the time, they can't think clearly enough to code it properly either.

:"Single sign-on" is, iirc, the established CS term for this kind of functionality. That probably explains the lapse in naming. --[[User:Juxo|Juxo]] 12:29, 29 Aug 2004 (EEST)

::Never heard it before. But in any case, if they call it login in MediaWiki, this has to be called single login, since it's a MediaWiki feature, right?

::Try getting sloppy on names with a compiler.

:::Internally it's just implemented as $wgUserTablePrefix, but that's not catchy and alliterative. -- [[User:Tim Starling|Tim Starling]] 18:34, 30 Aug 2004 (EEST)

----
Board vote code - 2004-07-01T02:56:36Z - Tim Starling: reply
https://develop.consumerium.org/w/index.php?title=Board_vote_code&diff=4247
The '''board vote code''' in [[MediaWiki]] uses [[public key crypto]] to enable an [[audit trail]]. No other [[wiki code]] has this feature yet. An exchange between the [[Lowest Troll]] of [[Consumerium]] and the [[developer vigilantiism|chief developer vigilante]] of [[Wikipedia]] included the following:

:[[User:Juxo]] is "very critical of submitting everything to computers because that gives technocrats too much (secret) power, but in this case there is no prob since the private key could be on a disk in a safe"
::[[Tim Starling]] responds "yes, I wanted to make it so that it was hard for a developer to rig since a developer was one of the candidates and he came very close to winning, too"

This is probably a reference to [[Erik Moeller]] who, along with Starling, does most of the [[developer vigilantiism]] and distributes [[vandalbot]] code to those willing to run a [[denial of service attack]] against other [[GFDL corpus access provider]]s. By making it "very hard for a developer to rig", it becomes possible only for these two individuals, Moeller and Starling, to rig the votes. ''See the [[Disinfopedia]] entry on [http://www.disinfopedia.org/wiki.phtml?title=Diebold Diebold Election Systems Corporation] for more on the various issues with vote-rigging and why electronic voting is usually bad.''

According to [[Tim Starling]], "it's also made so that's it's fairly difficult for a developer to work out who is voting for whom; they'd have to constantly run a monitoring program on the server, which is detectable; instead of just get in, grab the results and cover their tracks". Use of terms like ''fairly'' difficult and ''detectable'' and ''cover their tracks'' implies, of course, that Starling himself can actually do these things and ensure they are covered up. In the future this might be of benefit to his friend and ally [[Erik Moeller]], who is a strong opponent of [[English Wikipedia User Anthere]], who won the so-called "election" this time - probably just to make everything look honest?

Individual records in plain text have three lines: two for the two different positions, and one for "salt" (though the [[GPG algorithm]] adds its own salt). "The plain text records don't contain the usernames, it's not trivial for even the private key holder to determine who voted for who assuming the private key holder doesn't have access to the full DB," which of course some do. Apparently the SQL DB hides the order in which votes were cast, which is why to track votes one would need to monitor every transaction as it occurred.
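
A sketch of the record layout just described. The field names are assumptions; the real code builds something like this and pipes it to GPG for public-key encryption:

<pre>
#include <string>

// Two ballot lines for the two positions, plus one line of salt. Note that no
// username appears anywhere in the plain text, which is what makes later
// vote-to-voter matching hard even for the private key holder.
std::string buildRecord(const std::string& ballotForPositionOne,
                        const std::string& ballotForPositionTwo,
                        const std::string& salt) {
    return ballotForPositionOne + "\n" +
           ballotForPositionTwo + "\n" +
           salt + "\n";
}
</pre>
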
<br />
The web interface gives two different data displays :<br />
*the "list", where names and timestamps are displayed and where an election administrator may use this list to strike out invalid votes, e.g. of [[trolls]] who oppose the [[sysop power structure]]<br />
*the "dump", which lists the encrypted records in chronological order <br />
<br />
Starling admits that "there's a few ways an election administrator could match up dump entries with list entries" such as to match records in "dump" with timestamps", and muses about shuffling them so that only the most expert of the [[developer vigilantiism|vote riggers]] could do this trace very reliably to identify political enemies for future harassment by the [[echo chamber]]s.<br />
<br />
"An election administrator with the private key could find out which entry belongs to which person by striking it out temporarily, and seeing which dump entry disappears" according to Starling, "so the way to get around that vulnerability is to make the private key holder a non-administrator" but given the power of the [[sysop power structure]] that person can be made to comply with more or less any demand to assert or impose the [[Cabal point of view]].<br />
<br />
Like all [[e-voting]] mechanisms, this one is certainly open to rigging and spying at least by its own developers. There is probably no way around that.<br />
<br />
----<br />
<br />
LOL!!! That's the funniest thing I've read in ages. You think I'm a friend and ally of Erik, and I wanted to help him win the election against Anthere??? I've made no secret of my dislike for Erik. He's arrogant and overbearing. He's done a few things to piss me off in the past and I'm still bearing a grudge. I voted for everyone ''except'' him on the contributing ballot. By contrast I have a great deal of respect for Anthere.<br />
<br />
:That's your story. It could be a [[cover story]]. He is certainly your ally in [[developer vigilantiism]] (huge [[IP range block]]s affecting whole cities simply to prevent challenge to the [[Sysop Vandal point of view]]) though he is prone to [[libel]] and so far you are not. He is certainly arrogant, overbearing, and self-certain. He's a vile little creep! But he wants the same type of top-down control as [[Daniel Mayer]] does, with these people of no particular consequence making critical decisions about who participates, trying to suppress the [[Wikipedia Red Faction]], and just making ordinary stupid decisions with incoherent unlogic like [[Auntie Angela]] (who ''was'' "elected"). --[[142.177.X.X]]<br />
<br />
I had two personal reasons for making the voting system hard for developers to rig: firstly out of distrust for Erik, and secondly because I was entertaining visions of being a candidate myself. It takes a lot of care to design a voting system such that nobody could reasonably claim that even its designer could rig it. <br />
<br />
:Yes it does. But surely you comprehend that any voting system must be analyzed from a strictly hostile, suspicious point of view with all possible [[Wikimedia corruption|corruption]]s considered. Any slack or benefit of the doubt whatoever and it will be exploited. Really the only test of an evoting system is for one group to fully control its deployment and then totally lose: this happened recently in India to the BJP whose pet voting machine company installed [[e-voting]] all over India, and then Congress Party got elected! That is the only proof of honesty: the clique being entirely locked out. And sorry, the final results prove that the clique was far from locked out. The only person who is actually not a vile [[sysop vandal|vandal]] or [[vile mailing list|spreader of lies]], barely got in, and she's literally the only one on the top six who did. Next time it's the clique all the way. --[[142.177.X.X]]<br />
<br />
This is made possible by displaying the encrypted election records. When someone votes, their election record both in plain text and in encrypted form is displayed to them. They may then check to make sure it appears on the dump. If it spontaenously disappears, then they can raise the alarm bells. A developer could rig it so that a different dump is displayed to the general public than to the private key holder, but the private key holder could check for this by requesting copies of the dump downloaded by other people. <br />
<br />
:Only a tiny number of people know how to do this kind of [[audit]]. As with [[vandalbot]] code, there are extreme technical barriers to understanding it - meaning insiders always have an edge. [[E-voting]] is inherently untrustworthy. --[[142.177.X.X]]<br />
<br />
::Actually this procedure was quite clearly explained to me by the board vote code. It stated that you may download a copy of this plain text and that encrypted to later on check that the encrypted version is still included in the "dump". As simple as that. As far as I understand computer science I must say that meticulous detail has been has been put into this fine piece of code by Tim Staring --[[User:Juxo|Juxo]] 11:50, 29 Jun 2004 (EEST)<br />
<br />
Any paranoid member of the general community can check for disappearing vote records by regularly downloading the entire dump and comparing new dumps and old dumps side by side. Voting records will indeed disappear from the dump due to the election administrator striking out invalid votes, or when someone votes twice. But if such removals are challenged, they can be checked for legitimacy by a third party examining the log.<br />
<br />
:So write up an audit protocol that an ordinary IQ 100 no-programming-skill user can carry out, to determine by spot audits if everything always matches. --[[142.177.X.X]]<br />
<br />
::It already exists --[[User:Juxo|Juxo]] 11:50, 29 Jun 2004 (EEST)<br />
<br />
An improvement to this system would be to sign encrypted election records with a secret key stored on the server. With the current system, if someone's vote disappears, the administration could conceivably claim that they are making up the story. If they have a signed record to prove that they did actually vote, it means either that the votes were tampered with or that the claimant hacked into the server and obtained the private key. Either case should be sufficient cause to declare the election invalid. <br />
<br />
:Good point, this would render the system near-untamperable beyond reasonable doubt --[[User:Juxo|Juxo]] 13:49, 27 Jun 2004 (EEST)<br />
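<br />
A sketch of that improvement, with an HMAC standing in for a public-key signature purely to keep the example short (the proposal above says "sign", and the key value here is a placeholder):<br />
<br />
<pre>
# Sketch only: the server tags each encrypted record with a MAC under a
# key held only on the server; a voter who keeps (record, tag) can later
# prove their record was accepted.  HMAC stands in for a real signature.
import hashlib
import hmac

SERVER_KEY = b"placeholder-secret-held-on-the-server"

def tag_record(encrypted_record: bytes) -> str:
    return hmac.new(SERVER_KEY, encrypted_record, hashlib.sha256).hexdigest()

def verify_tag(encrypted_record: bytes, tag: str) -> bool:
    return hmac.compare_digest(tag_record(encrypted_record), tag)
</pre>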
<br />
Secrecy, that is preventing anyone from discovering who voted for who, is also very important. My original idea was to preserve secrecy except from the private key holder. I later realised that simply leaving the username off the encrypted records would discourage casual snooping by the private key holder. It also makes it harder for a developer to breach secrecy by reading the temporary files input to GPG. I made no effort to prevent a determined private key holder from working out who voted for who, although this may be possible in principle.<br />
<br />
:[[w:political privacy]] is another matter entirely - some think it should not exist. Only real communities making decisions of real importance probably need truly and totally secret ballots. This would be lower priority. --[[142.177.X.X]]<br />
<br />
A developer may breach secrecy in several ways, such as installing a packet sniffer, or modifying the voting code such that unencrypted votes are logged. However, these methods are detectable, and difficult enough that casual snooping is impossible. Detectability adds an element of risk for a developer wanting to breach secrecy. Note that for breaches of secrecy to be detected, there must be a vigilant non-corrupt person with root access to the servers.<br />
<br />
:This "vigilant non-corrupt person with root access to the servers" probably does not exist. [[User:Brion]] maybe. He has not participated in [[echo chamber]]s or [[developer vigilantiism]]. But there will not always be such a trusted person in that role. --[[142.177.X.X]]<br />
<br />
Wikipedia has a diverse group of developers with root access. Others wishing to use a similar voting system may not be so lucky. In such cases, it may be better to use an external company to provide the web hosting, and to allow only a trusted neutral person access to that machine, or to allow a diverse group of people access, for oversight. -- [[User:Tim Starling|Tim Starling]] 10:44, 27 Jun 2004 (EEST)<br />
<br />
:Well you are thinking correctly but narrowly about the basic problems of the voting protocol. You might have fun over at [http://www.civicactions.org civicactions.org] detailing some of this in the context of the US elections. --[[142.177.X.X]]<br />
<br />
:The real problem is of course "who gets to vote" - no matter what their contributions and no matter how correct or eloquent they are, [[trolls]] do not by definition give [[Wikimedia]] money to oppress them, so, they do not vote in this corporate system [[Bomis]] has set up to continue [[Wikimedia corruption]] of the [[GFDL corpus]], and to lie to [[GFDL corpus access provider]]s about what is a [[GFDL violation]]. Since the whole purpose of [[Wikimedia]] is lies, it does not seem that it would necessarily be morally wrong for liars and vote-riggers to run it. - [[obvious troll]]s --[[142.177.X.X]]<br />
<br />
-----<br />
<br />
:''That's your story. It could be a [[cover story]]. He is certainly your ally in [[developer vigilantiism]] (huge [[IP range block]]s affecting whole cities simply to prevent challenge to the [[Sysop Vandal point of view]]) though he is prone to [[libel]] and so far you are not.'' --[[142.177.X.X]]<br />
<br />
Look, [name deleted], just because two people hate you doesn't mean they are co-conspirators. Erik and I are quite different in most respects, however we share the ability (along with most humans) to spot an asshole when we see one. The reason you are harassed wherever you go is because you actively work to make people angry, not because the people you attack are all part of a vast conspiracy to suppress what you have to say. -- [[User:Tim Starling|Tim Starling]] 05:03, 29 Jun 2004 (EEST)<br />
<br />
::To say "you actively work to make people angry" is to practice [[amateur psychiatry]]. To attach names to anonymous parties is probably [[libel]] and is at best unwise (and deleted to keep the [[Consumerium Governance Organization]] out of trouble). To assert "the reason" is to claim [[God's Eye view]]. And "not because the people you attack are all part of a vast conspiracy" is probably more applicable to this theory that all the [[trolls]] are one person; <br />
<br />
:::It's my opinion, idiot. Not amateur psychiatry or a pronouncement of absolute truth from a "God's Eye view". You ascribe motives to me, I ascribe motives to you. This is not psychiatry, just ordinary human interaction. -- [[User:Tim Starling|Tim Starling]] 05:56, 1 Jul 2004 (EEST)<br />
<br />
::What makes Tim Starling and Erik Moeller the same? Amateur psychiatry, libel, God's Eye view, and the assumption that [[alleged and collective identity]] can somehow be determined by their own personal emotions, which are very, very damaged.<br />
<br />
:::You accuse me of "amateur psychiatry" and go on to say that my personal emotions are "very very damaged"? Well that's an interesting diagnosis, Dr. Hubley. Your hypocrisy knows no bounds. You claim you are not the same person as EoT? Not everyone surfs the Internet with a souped up Commodore 64, you know. -- [[User:Tim Starling|Tim Starling]] 05:56, 1 Jul 2004 (EEST) <br />
<br />
::If you fail to have even this degree of self-reflection, you are just stupid. This is of course no surprise to the [[trolls]], who will eventually eliminate you from any position of trust or responsibility in any serious project.<br />
<br />
:::Your delusions of grandeur are extraordinary. You really think you're going to lead an empire of trolls who will control every serious project in the world? -- [[User:Tim Starling|Tim Starling]] 05:56, 1 Jul 2004 (EEST)</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:142.177.X.X&diff=5118Talk:142.177.X.X2004-07-01T02:33:01Z<p>Tim Starling: rv</p>
<hr />
<div></div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=MediaWiki_approval_voting_code&diff=4214MediaWiki approval voting code2004-06-29T14:50:32Z<p>Tim Starling: did I say that?</p>
<hr />
<div>[[MediaWiki]] currently lacks approval voting code with general applicability. It does have approval voting code specifically designed for the [[Wikimedia Board of Trustees]] election, see [[Board vote code]]. However everything except the candidates names is hard-coded, and there can only be one vote conducted on a wiki at any one time. General approval voting code requires:<br />
<br />
* The ability to conduct more than one vote at a time<br />
* The ability for users to start votes<br />
* Flexibility in voting rules including:<br />
** Who can vote<br />
** Administration<br />
** Ending dates or conditions</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=MediaWiki_approval_voting_code&diff=4213MediaWiki approval voting code2004-06-29T14:48:37Z<p>Tim Starling: many things to do</p>
<hr />
<div>[[MediaWiki]] currently lacks approval voting code with general applicability. It does have approval voting code specifically designed for the [[Wikimedia Board of Trustees]] election, see [[Board vote code]]. However everything except the candidates names is hard-coded, and there can only be one vote conducted on a wiki at any one time. General approval voting code requires:<br />
<br />
* The ability to conduct more than one vote at a time<br />
* The ability for users to start votes<br />
* Flexibility in voting rules including:<br />
** Candidature<br />
** Administration<br />
** Ending dates or conditions</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Board_vote_code&diff=4209Board vote code2004-06-29T02:03:21Z<p>Tim Starling: just because two people hate you doesn't mean they are co-conspirators</p>
<hr />
<div>The '''board vote code''' in [[MediaWiki]] uses [[public key crypto]] to enable an [[audit trail]]. No other [[wiki code]] has this feature yet. An exchange between the [[Lowest Troll]] of [[Consumerium]] and the [[developer vigilantiism|chief developer vigilante]] of [[Wikipedia]] included the following:<br />
<br />
:[[User:Juxo]] is "very critical of submitting everything to computers because that gives technocrats too much (secret) power, but in this case there is no prob since the private key could be on a disk in a safe."<br />
::[[Tim Starling]] responds "yes, I wanted to make it so that it was hard for a developer to rig since a developer was one of the candidates and he came very close to winning, too"<br />
<br />
This is probably a reference to [[Erik Moeller]] who, along with Starling, does most of the [[developer vigilantiism]] and distributing [[vandalbot]] code to those willing to do [[denial of service attack]] against other [[GFDL corpus access provider]]s. By making it "very hard for a developer to rig" it becomes possible only for these two individuals, Moeller and Starling, to rig the votes. ''See [[Disinfopedia]] entries on [http://www.disinfopedia.org/wiki.phtml?title=Diebold Diebold Election Systems Corporation] for more on the various issues with vote-rigging and why electronic voting is usually bad.''<br />
<br />
According to [[Tim Starling]], "it's also made so that it's fairly difficult for a developer to work out who is voting for whom; they'd have to constantly run a monitoring program on the server, which is detectable; instead of just get in, grab the results and cover their tracks". Use of terms like ''fairly'' difficult and ''detectable'' and ''cover their tracks'' implies of course that Starling himself can actually do these things, and ensure they are covered up. In the future this might be of benefit to his friend and ally [[Erik Moeller]] who is a strong opponent of [[English Wikipedia User Anthere]], who won the so-called "election" this time - probably just to make everything look honest?<br />
<br />
Individual records in plain text have three lines: two for the two different positions, and one for "salt" (though the [[GPG algorithm]] adds its own salt). "The plain text records don't contain the usernames, it's not trivial for even the private key holder to determine who voted for who assuming the private key holder doesn't have access to the full DB," which of course some do. Apparently the SQL DB hides the order in which votes were cast, which is why to track votes one would need to monitor every transaction as it occurred.<br />
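<br />
For illustration, assembling such a three-line record might look like this; the field layout below is a guess from the description above, not taken from the actual code.<br />
<br />
<pre>
# Sketch only: a three-line plain-text record as described - one line
# per position voted on, plus a salt line.  The layout is guessed.
import secrets

def make_record(ballot_one: str, ballot_two: str) -> str:
    salt = secrets.token_hex(8)  # GPG adds its own salt on top of this
    return "\n".join([ballot_one, ballot_two, salt])

print(make_record("approve: CandidateA, CandidateC", "approve: CandidateB"))
</pre>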
<br />
The web interface gives two different data displays:<br />
*the "list", where names and timestamps are displayed and where an election administrator may use this list to strike out invalid votes, e.g. of [[trolls]] who oppose the [[sysop power structure]]<br />
*the "dump", which lists the encrypted records in chronological order <br />
<br />
Starling admits that "there's a few ways an election administrator could match up dump entries with list entries", such as matching records in the "dump" with timestamps, and muses about shuffling them so that only the most expert of the [[developer vigilantiism|vote riggers]] could do this trace very reliably to identify political enemies for future harassment by the [[echo chamber]]s.<br />
<br />
"An election administrator with the private key could find out which entry belongs to which person by striking it out temporarily, and seeing which dump entry disappears" according to Starling, "so the way to get around that vulnerability is to make the private key holder a non-administrator" but given the power of the [[sysop power structure]] that person can be made to comply with more or less any demand to assert or impose the [[Cabal point of view]].<br />
<br />
Like all [[e-voting]] mechanisms, this one is certainly open to rigging and spying at least by its own developers. There is probably no way around that.<br />
<br />
----<br />
<br />
LOL!!! That's the funniest thing I've read in ages. You think I'm a friend and ally of Erik, and I wanted to help him win the election against Anthere??? I've made no secret of my dislike for Erik. He's arrogant and overbearing. He's done a few things to piss me off in the past and I'm still bearing a grudge. I voted for everyone ''except'' him on the contributing ballot. By contrast I have a great deal of respect for Anthere.<br />
<br />
::That's your story. It could be a [[cover story]]. He is certainly your ally in [[developer vigilantiism]] (huge [[IP range block]]s affecting whole cities simply to prevent challenge to the [[Sysop Vandal point of view]]) though he is prone to [[libel]] and so far you are not. He is certainly arrogant, overbearing, and self-certain. He's a vile little creep! But he wants the same type of top-down control as [[Daniel Mayer]] does, with these people of no particular consequence making critical decisions about who participates, trying to suppress the [[Wikipedia Red Faction]], and just making ordinary stupid decisions with incoherent unlogic like [[Auntie Angela]] (who ''was'' "elected").<br />
<br />
I had two personal reasons for making the voting system hard for developers to rig: firstly out of distrust for Erik, and secondly because I was entertaining visions of being a candidate myself. It takes a lot of care to design a voting system such that nobody could reasonably claim that even its designer could rig it. <br />
<br />
::Yes it does. But surely you comprehend that any voting system must be analyzed from a strictly hostile, suspicious point of view, with all possible [[Wikimedia corruption|corruption]]s considered. Any slack or benefit of the doubt whatsoever and it will be exploited. Really the only test of an [[e-voting]] system is for one group to fully control its deployment and then totally lose: this happened recently in India to the BJP, whose pet voting machine company installed [[e-voting]] all over India, and then the Congress Party got elected! That is the only proof of honesty: the clique being entirely locked out. And sorry, the final results prove that the clique was far from locked out. The only person who is actually not a vile [[sysop vandal|vandal]] or [[vile mailing list|spreader of lies]] barely got in, and she's literally the only one in the top six who did. Next time it's the clique all the way.<br />
<br />
This is made possible by displaying the encrypted election records. When someone votes, their election record, both in plain text and in encrypted form, is displayed to them. They may then check to make sure it appears on the dump. If it spontaneously disappears, then they can raise the alarm. A developer could rig it so that a different dump is displayed to the general public than to the private key holder, but the private key holder could check for this by requesting copies of the dump downloaded by other people. <br />
<br />
::Only a tiny number of people know how to do this kind of [[audit]]. As with [[vandalbot]] code, there are extreme technical barriers to understanding it - meaning insiders always have an edge. [[E-voting]] is inherently untrustworthy.<br />
<br />
Any paranoid member of the general community can check for disappearing vote records by regularly downloading the entire dump and comparing new dumps and old dumps side by side. Voting records will indeed disappear from the dump due to the election administrator striking out invalid votes, or when someone votes twice. But if such removals are challenged, they can be checked for legitimacy by a third party examining the log.<br />
<br />
::So write up an audit protocol that an ordinary IQ 100 no-programming-skill user can carry out, to determine by spot audits if everything always matches.<br />
<br />
An improvement to this system would be to sign encrypted election records with a secret key stored on the server. With the current system, if someone's vote disappears, the administration could conceivably claim that they are making up the story. If they have a signed record to prove that they did actually vote, it means either that the votes were tampered with or that the claimant hacked into the server and obtained the private key. Either case should be sufficient cause to declare the election invalid. <br />
<br />
:Good point, this would render the system near-untamperable beyond reasonable doubt --[[User:Juxo|Juxo]] 13:49, 27 Jun 2004 (EEST)<br />
<br />
Secrecy, that is preventing anyone from discovering who voted for who, is also very important. My original idea was to preserve secrecy except from the private key holder. I later realised that simply leaving the username off the encrypted records would discourage casual snooping by the private key holder. It also makes it harder for a developer to breach secrecy by reading the temporary files input to GPG. I made no effort to prevent a determined private key holder from working out who voted for who, although this may be possible in principle.<br />
<br />
::[[w:political privacy]] is another matter entirely - some think it should not exist. Only real communities making decisions of real importance probably need truly and totally secret ballots. This would be lower priority.<br />
<br />
A developer may breach secrecy in several ways, such as installing a packet sniffer, or modifying the voting code such that unencrypted votes are logged. However, these methods are detectable, and difficult enough that casual snooping is impossible. Detectability adds an element of risk for a developer wanting to breach secrecy. Note that for breaches of secrecy to be detected, there must be a vigilant non-corrupt person with root access to the servers.<br />
<br />
::This "vigilant non-corrupt person with root access to the servers" probably does not exist. [[Brion Vibber]] maybe. He has not participated in [[echo chamber]]s or [[developer vigilantiism]]. But there will not always be such a trusted person in that role.<br />
<br />
Wikipedia has a diverse group of developers with root access. Others wishing to use a similar voting system may not be so lucky. In such cases, it may be better to use an external company to provide the web hosting, and to allow only a trusted neutral person access to that machine, or to allow a diverse group of people access, for oversight. -- [[User:Tim Starling|Tim Starling]] 10:44, 27 Jun 2004 (EEST)<br />
<br />
::Well you are thinking correctly but narrowly about the basic problems of the voting protocol. You might have fun over at [http://www.civicactions.org civicactions.org] detailing some of this in the context of the US elections.<br />
<br />
::The real problem is of course "who gets to vote" - no matter what their contributions and no matter how correct or eloquent they are, [[trolls]] do not by definition give [[Wikimedia]] money to oppress them, so, they do not vote in this corporate system [[Bomis]] has set up to continue [[Wikimedia corruption]] of the [[GFDL corpus]], and to lie to [[GFDL corpus access provider]]s about what is a [[GFDL violation]]. Since the whole purpose of [[Wikimedia]] is lies, it does not seem that it would necessarily be morally wrong for liars and vote-riggers to run it. - [[obvious troll]]s<br />
<br />
-----<br />
<br />
:''That's your story. It could be a [[cover story]]. He is certainly your ally in [[developer vigilantiism]] (huge [[IP range block]]s affecting whole cities simply to prevent challenge to the [[Sysop Vandal point of view]]) though he is prone to [[libel]] and so far you are not.''<br />
<br />
Look, Craig, just because two people hate you doesn't mean they are co-conspirators. Erik and I are quite different in most respects, however we share the ability (along with most humans) to spot an asshole when we see one. The reason you are harassed wherever you go is because you actively work to make people angry, not because the people you attack are all part of a vast conspiracy to suppress what you have to say. -- [[User:Tim Starling|Tim Starling]] 05:03, 29 Jun 2004 (EEST)</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:142.177.X.X&diff=4208Talk:142.177.X.X2004-06-29T01:47:25Z<p>Tim Starling: hoom hum</p>
<hr />
<div></div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:142.177.X.X&diff=4185Talk:142.177.X.X2004-06-27T14:10:36Z<p>Tim Starling: </p>
<hr />
<div></div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:Board_vote_code&diff=4189Talk:Board vote code2004-06-27T13:46:09Z<p>Tim Starling: bug</p>
<hr />
<div>This article is obviously not from [[SVpov]], i.e. [[Tim Starling]]'s own view.<br />
<br />
[[Vile mailing list]] posts [http://mail.wikipedia.org/pipermail/wikipedia-l/2004-June/015741.html] and [http://mail.wikipedia.org/pipermail/wikipedia-l/2004-June/015743.html] are evidence of failures of the code, that may be evidence of deliberate ditching of votes.<br />
<br />
:I was very annoyed about that. The problem was that Hashar decided our code would be more efficient if we used single quotes everywhere instead of double quotes. So he converted the whole code base and uploaded the new code immediately, and in the process randomly broke features everywhere. The voting feature was broken such that it would get a fatal PHP error on submission, and return a blank page. It did this for everyone who attempted to vote, for maybe a day or two. I was angry, because I knew it would damage the credibility of the vote and of my software. This is in my opinion a strong argument in favour of having the vote conducted on a separate secure server with a static code base. <br />
<br />
:The bug was fixed shortly after it was reported, and Danny and Imran decided not to extend the voting period. They did this without influence from me. -- [[User:Tim Starling|Tim Starling]] 16:46, 27 Jun 2004 (EEST)</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Board_vote_code&diff=4177Board vote code2004-06-27T07:58:16Z<p>Tim Starling: possible improvement</p>
<hr />
<div>The '''board vote code''' in [[MediaWiki]] uses [[public key crypto]] to enable an [[audit trail]]. No other [[wiki code]] has this feature yet. An exchange between the [[Lowest Troll]] of [[Consumerium]] and the [[developer vigilantiism|chief developer vigilante]] of [[Wikipedia]] included the following:<br />
<br />
:[[User:Juxo]] is "very critical of submitting everything to computers because that gives technocrats too much (secret) power, but in this case there is no prob since the private key could be on a disk in a safe."<br />
::[[Tim Starling]] responds "yes, I wanted to make it so that it was hard for a developer to rig since a developer was one of the candidates and he came very close to winning, too"<br />
<br />
This is probably a reference to [[Erik Moeller]] who, along with Starling, does most of the [[developer vigilantiism]] and distributing [[vandalbot]] code to those willing to do [[denial of service attack]] against other [[GFDL corpus access provider]]s. By making it "very hard for a developer to rig" it becomes possible only for these two individuals, Moeller and Starling, to rig the votes. ''See [[Disinfopedia]] entries on [[Diebold]] corporation for more on the various issues with vote-rigging and why electronic voting is usually bad.''<br />
<br />
According to [[Tim Starling]], "it's also made so that it's fairly difficult for a developer to work out who is voting for whom; they'd have to constantly run a monitoring program on the server, which is detectable; instead of just get in, grab the results and cover their tracks". Use of terms like ''fairly'' difficult and ''detectable'' and ''cover their tracks'' implies of course that Starling himself can actually do these things, and ensure they are covered up. In the future this might be of benefit to his friend and ally [[Erik Moeller]] who is a strong opponent of [[English Wikipedia User Anthere]], who won the so-called "election" this time - probably just to make everything look honest?<br />
<br />
Individual records in plain text have three lines: two for the two different positions, and one for "salt" (though the [[GPG algorithm]] adds its own salt). "The plain text records don't contain the usernames, it's not trivial for even the private key holder to determine who voted for who assuming the private key holder doesn't have access to the full DB," which of course some do. Apparently the SQL DB hides the order in which votes were cast, which is why to track votes one would need to monitor every transaction as it occurred.<br />
<br />
The web interface gives two different data displays:<br />
*the "list", where names and timestamps are displayed and where an election administrator may use this list to strike out invalid votes, e.g. of [[trolls]] who oppose the [[sysop power structure]]<br />
*the "dump", which lists the encrypted records in chronological order <br />
<br />
Starling admits that "there's a few ways an election administrator could match up dump entries with list entries", such as matching records in the "dump" with timestamps, and muses about shuffling them so that only the most expert of the [[developer vigilantiism|vote riggers]] could do this trace very reliably to identify political enemies for future harassment by the [[echo chamber]]s.<br />
<br />
"An election administrator with the private key could find out which entry belongs to which person by striking it out temporarily, and seeing which dump entry disappears" according to Starling, "so the way to get around that vulnerability is to make the private key holder a non-administrator" but given the power of the [[sysop power structure]] that person can be made to comply with more or less any demand to assert or impose the [[Cabal point of view]].<br />
<br />
Like all [[e-voting]] mechanisms, this one is certainly open to rigging and spying at least by its own developers. There is probably no way around that.<br />
<br />
----<br />
<br />
LOL!!! That's the funniest thing I've read in ages. You think I'm a friend and ally of Erik, and I wanted to help him win the election against Anthere??? I've made no secret of my dislike for Erik. He's arrogant and overbearing. He's done a few things to piss me off in the past and I'm still bearing a grudge. I voted for everyone ''except'' him on the contributing ballot. By contrast I have a great deal of respect for Anthere.<br />
<br />
I had two personal reasons for making the voting system hard for developers to rig: firstly out of distrust for Erik, and secondly because I was entertaining visions of being a candidate myself. It takes a lot of care to design a voting system such that nobody could reasonably claim that even its designer could rig it. <br />
<br />
This is made possible by displaying the encrypted election records. When someone votes, their election record, both in plain text and in encrypted form, is displayed to them. They may then check to make sure it appears on the dump. If it spontaneously disappears, then they can raise the alarm. A developer could rig it so that a different dump is displayed to the general public than to the private key holder, but the private key holder could check for this by requesting copies of the dump downloaded by other people. <br />
<br />
Any paranoid member of the general community can check for disappearing vote records by regularly downloading the entire dump and comparing new dumps and old dumps side by side. Voting records will indeed disappear from the dump due to the election administrator striking out invalid votes, or when someone votes twice. But if such removals are challenged, they can be checked for legitimacy by a third party examining the log.<br />
<br />
An improvement to this system would be to sign encrypted election records with a secret key stored on the server. With the current system, if someone's vote disappears, the administration could conceivably claim that they are making up the story. If they have a signed record to prove that they did actually vote, it means either that the votes were tampered with or that the claimant hacked into the server and obtained the private key. Either case should be sufficient cause to declare the election invalid. <br />
<br />
Secrecy, that is preventing anyone from discovering who voted for who, is also very important. My original idea was to preserve secrecy except from the private key holder. I later realised that simply leaving the username off the encrypted records would discourage casual snooping by the private key holder. It also makes it harder for a developer to breach secrecy by reading the temporary files input to GPG. I made no effort to prevent a determined private key holder from working out who voted for who, although this may be possible in principle.<br />
<br />
A developer may breach secrecy in several ways, such as installing a packet sniffer, or modifying the voting code such that unencrypted votes are logged. However, these methods are detectable, and difficult enough that casual snooping is impossible. Detectability adds an element of risk for a developer wanting to breach secrecy. Note that for breaches of secrecy to be detected, there must be a vigilant non-corrupt person with root access to the servers. Wikipedia has a diverse group of developers with root access. Others wishing to use a similar voting system may not be so lucky. In such cases, it may be better to use an external company to provide the web hosting, and to allow only a trusted neutral person access to that machine, or to allow a diverse group of people access, for oversight.<br />
<br />
-- [[User:Tim Starling|Tim Starling]] 10:44, 27 Jun 2004 (EEST)</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Board_vote_code&diff=4173Board vote code2004-06-27T07:44:06Z<p>Tim Starling: my evil plan to get Erik elected... HA! :-) </p>
<hr />
<div>The '''board vote code''' in [[MediaWiki]] uses [[public key crypto]] to enable an [[audit trail]]. No other [[wiki code]] has this feature yet. An exchange between the [[Lowest Troll]] of [[Consumerium]] and the [[developer vigilantiism|chief developer vigilante]] of [[Wikipedia]] included the following:<br />
<br />
:[[User:Juxo]] is "very critical of submitting everything to computers because that gives technocrats too much (secret) power, but in this case there is no prob since the private key could be on a disk in a safe."<br />
::[[Tim Starling]] responds "yes, I wanted to make it so that it was hard for a developer to rig since a developer was one of the candidates and he came very close to winning, too"<br />
<br />
This is probably a reference to [[Erik Moeller]] who, along with Starling, does most of the [[developer vigilantiism]] and distributing [[vandalbot]] code to those willing to do [[denial of service attack]] against other [[GFDL corpus access provider]]s. By making it "very hard for a developer to rig" it becomes possible only for these two individuals, Moeller and Starling, to rig the votes. ''See [[Disinfopedia]] entries on [[Diebold]] corporation for more on the various issues with vote-rigging and why electronic voting is usually bad.''<br />
<br />
According to [[Tim Starling]], "it's also made so that it's fairly difficult for a developer to work out who is voting for whom; they'd have to constantly run a monitoring program on the server, which is detectable; instead of just get in, grab the results and cover their tracks". Use of terms like ''fairly'' difficult and ''detectable'' and ''cover their tracks'' implies of course that Starling himself can actually do these things, and ensure they are covered up. In the future this might be of benefit to his friend and ally [[Erik Moeller]] who is a strong opponent of [[English Wikipedia User Anthere]], who won the so-called "election" this time - probably just to make everything look honest?<br />
<br />
Individual records in plain text have three lines: two for the two different positions, and one for "salt" (though the [[GPG algorithm]] adds its own salt). "The plain text records don't contain the usernames, it's not trivial for even the private key holder to determine who voted for who assuming the private key holder doesn't have access to the full DB," which of course some do. Apparently the SQL DB hides the order in which votes were cast, which is why to track votes one would need to monitor every transaction as it occurred.<br />
<br />
The web interface gives two different data displays:<br />
*the "list", where names and timestamps are displayed and where an election administrator may use this list to strike out invalid votes, e.g. of [[trolls]] who oppose the [[sysop power structure]]<br />
*the "dump", which lists the encrypted records in chronological order <br />
<br />
Starling admits that "there's a few ways an election administrator could match up dump entries with list entries", such as matching records in the "dump" with timestamps, and muses about shuffling them so that only the most expert of the [[developer vigilantiism|vote riggers]] could do this trace very reliably to identify political enemies for future harassment by the [[echo chamber]]s.<br />
<br />
"An election administrator with the private key could find out which entry belongs to which person by striking it out temporarily, and seeing which dump entry disappears" according to Starling, "so the way to get around that vulnerability is to make the private key holder a non-administrator" but given the power of the [[sysop power structure]] that person can be made to comply with more or less any demand to assert or impose the [[Cabal point of view]].<br />
<br />
Like all [[e-voting]] mechanisms, this one is certainly open to rigging and spying at least by its own developers. There is probably no way around that.<br />
<br />
----<br />
<br />
LOL!!! That's the funniest thing I've read in ages. You think I'm a friend and ally of Erik, and I wanted to help him win the election against Anthere??? I've made no secret of my dislike for Erik. He's arrogant and overbearing. He's done a few things to piss me off in the past and I'm still bearing a grudge. I voted for everyone ''except'' him on the contributing ballot. By contrast I have a great deal of respect for Anthere.<br />
<br />
I had two personal reasons for making the voting system hard for developers to rig: firstly out of distrust for Erik, and secondly because I was entertaining visions of being a candidate myself. It takes a lot of care to design a voting system such that nobody could reasonably claim that even its designer could rig it. <br />
<br />
This is made possible by displaying the encrypted election records. When someone votes, their election record, both in plain text and in encrypted form, is displayed to them. They may then check to make sure it appears on the dump. If it spontaneously disappears, then they can raise the alarm. A developer could rig it so that a different dump is displayed to the general public than to the private key holder, but the private key holder could check for this by requesting copies of the dump downloaded by other people. <br />
<br />
Any paranoid member of the general community can check for disappearing vote records by regularly downloading the entire dump and comparing new dumps and old dumps side by side. Voting records will indeed disappear from the dump due to the election administrator striking out invalid votes, or when someone votes twice. But if such removals are challenged, they can be checked for legitimacy by a third party examining the log.<br />
<br />
Secrecy, that is preventing anyone from discovering who voted for who, is also very important. My original idea was to preserve secrecy except from the private key holder. I later realised that simply leaving the username off the encrypted records would discourage casual snooping by the private key holder. It also makes it harder for a developer to breach secrecy by reading the temporary files input to GPG. I made no effort to prevent a determined private key holder from working out who voted for who, although this may be possible in principle.<br />
<br />
A developer may breach secrecy in several ways, such as installing a packet sniffer, or modifying the voting code such that unencrypted votes are logged. However, these methods are detectable, and difficult enough that casual snooping is impossible. Detectability adds an element of risk for a developer wanting to breach secrecy. Note that for breaches of secrecy to be detected, there must be a vigilant non-corrupt person with root access to the servers. Wikipedia has a diverse group of developers with root access. Others wishing to use a similar voting system may not be so lucky. In such cases, it may be better to use an external company to provide the web hosting, and to allow only a trusted neutral person access to that machine, or to allow a diverse group of people access, for oversight.<br />
<br />
-- [[User:Tim Starling|Tim Starling]] 10:44, 27 Jun 2004 (EEST)</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:GetWiki&diff=14612Talk:GetWiki2004-06-24T13:27:50Z<p>Tim Starling: indenting</p>
<hr />
<div>But is there automatic XML-import and hopefully diff facilities for importing from [[Wikinfo|internet-encyclopedia]] or [[wikipedia]] or any number and choice of [[GetWiki]] or [[MediaWiki]] running [[wikis]]?<br />
<br />
:This is a good reason to support a [[m:]] = "multiple", and [[n:]] = "neutral" and [[f:]] = "factional" name space. We select the best of each behind the scenes, rather than hardwiring assumptions about that into the article itself. Why not? We already point to [[w:]] articles which are ever-changing... so why not ever-changing redirects?<br />
<br />
::I'm not understanding what you are getting at with these namespaces. Please elaborate.<br />
<br />
:::See [[interwiki link standard]] first. When that works, we will want some abbreviations. If I think [[Wikinfo]] has the best [[multiple point of view]] in English then [[m:]] should go to [[en:Wikinfo:]], but if [[Wikipedia]] has the only article in French then [[m:]] should go to [[fr:Wikipedia:]], etc.. We need a way to make faction-specific (maybe even individualized) interwiki links. Note this only works if article titles are identical or there is very disciplined use of redirects for all possible titles (which is best anyway).<br />
<br />
:''We should copy and rewrite the articles we refer to in the [[Meta-Wikipedia]] space, as these are not in general relevant to [[Consumerium]] or even to [[mediawiki]]. Rewrites could help get some concepts clearer, and link them to the issues here.''<br />
<br />
We certainly need to get [[GetWiki]] to run a test-wiki to see how it is different and maybe use [[Wikinfo]] or use [[Wikipedia]] as a user [[preference]].<br />
<br />
:A good first step - the user could set where [[w:]] points for their purposes. Why not ask the GetWiki developers for this kind of feature? I'm sure they'll see the need for it. There should be arbitrary code possible to figure out where [[m:]], [[w:]], [[f:]], [[n:]] go. Maybe there already is?<br />
<br />
:Hmm... no point asking [[M.R.M._Parrott]] anything. Better to find people of a rational and [[troll-friendly]]/progressive bent to do yet another fork that [[Consumerium]] can share with [[Recyclopedia]]. There is really no point in trying to collaborate with one's ideological opposites on software projects. Mr. Parrott is a Kantian Platonist, and he further seems to be a narcissist, if one is to practice the same psychiatry on him that he applies to others. Not a reliable partner for anything. [[Trolls]] are done with him. He has seriously failed the test.<br />
<br />
::To set where m, w, f or n points to, use the interwiki table. For example:<br />
<br />
:::<tt>REPLACE INTO interwiki (iw_prefix,iw_url,iw_local) VALUES ('w','http://wikinfo.org/wiki.php?title=$1',0)</tt><br />
<br />
::Then restart memcached if you are using it. Interwiki links were moved from includes/Interwiki.php to the database in August 2003. If you have a version of MediaWiki from before then, you need to edit that file rather than the database. -- [[User:Tim Starling|Tim Starling]] 16:26, 24 Jun 2004 (EEST)<br />
<br />
----<br />
<br />
'''[[GetWiki]]: [[critical point of view]]''' is that the software is new, has some bugs, notably in making an [[interwiki link standard]] difficult to express (which [[MediaWiki]] does not) which may be deliberate sabotage for commercial purposes. However its primary drawback is its primary developer, [[M.R.M. Parrott]]. Here is a sample of his [[sysop vandalism]] on [[Wikinfo]]: <br />
<br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:27 . . Proteus (Talk) (deleted "Standard wiki URI": vandalism) <br />
* (diff) (hist) . . GetWiki talk:Corpus; 03:23 . . Proteus (Talk) (restored link) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:19 . . Proteus (Talk) (deleted "Interwiki identity standard": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:18 . . Proteus (Talk) (deleted "Wikinfo:Faction": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Block log; 03:18 . . Proteus (Talk) (blocked "142.177.82.159": stop spamming NOW) <br />
* (diff) (hist) . . GetWiki talk:Corpus; 03:14 . . 142.177.82.159 (Talk) (restore deleted names as redirects please, or at least don't object to their restoral as redirects; theory of conduct is not theory of behaviour, technological superiority implies moral inferiority) <br />
* (diff) (hist) . . M GetWiki talk:Corpus; 03:09 . . Proteus (Talk) (updated link) <br />
* (diff) (hist) . .N GetWiki talk:Corpus/InterWiki; 03:09 . . Proteus (Talk) (created page, moved materials) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:04 . . Proteus (Talk) (deleted "Talk:Standard wiki URI": redundant) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:03 . . Proteus (Talk) (deleted "Wikitext markup": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:03 . . Proteus (Talk) (deleted "Wikitext": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:03 . . Proteus (Talk) (deleted "Wikitax": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:00 . . Proteus (Talk) (deleted "Standard wiki URI": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:57 . . Proteus (Talk) (deleted "Talk:Faction": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:54 . . Proteus (Talk) (deleted "Wikinfo talk:Faction": redundant) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:53 . . Proteus (Talk) (deleted "Wikinfo:Faction": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . M Sympathetic point of view; 02:51 . . Proteus (Talk) (reverted) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:50 . . Proteus (Talk) (deleted "Interwiki identity standard": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . Sympathetic point of view; 02:49 . . 64.112.195.193 (Talk) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:48 . . Proteus (Talk) (deleted "Interwiki link standard": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:46 . . Proteus (Talk) (deleted "Wikinfo:Languages": vandalism) <br />
* (diff) (hist) . . M Trolls; 02:25 . . Proteus (Talk) (updated) <br />
* (diff) (hist) . . Trolls; 02:24 . . Proteus (Talk) (updated) <br />
* (diff) (hist) . . Caucasus by Levan Urushadze; 02:23 . . Levan Urushadze (Talk) <br />
* (diff) (hist) . . M Trolls; 02:19 . . Proteus (Talk) (updated) <br />
* (diff) (hist) . . Internet Encyclopedia:Protection log; 02:18 . . Proteus (Talk) (protected Trolls) <br />
* (diff) (hist) . .N Trolls; 02:18 . . Proteus (Talk) (from consumerium (note changes here)) <br />
<br />
The log is unedited. Anyone who believes it is ethical for a sysop to edit a page and then immediately protect it to curtail comment is probably unethical. This is not the kind of person who should be deciding which features to add to Recyclopedia, [[Consumerium:Itself]] or any other [[deliberative democracy]]-based project, or indeed any [[troll-friendly]] project. <br />
<br />
Anyone who thinks discussion of an [[interwiki link standard]] is only relevant to [[GetWiki]] must think there is nothing else managing the GFDL text corpus (no [[MediaWiki]], for example), which seems to reflect a commercial bias - ironic, as he accuses [[trolls]] of [[spamming]] here! Most likely it is his own spam-like assertions that GetWiki will eventually solve the problem that motivate him to make it hard to find or compare any equivalent proposals on other [[large public wiki]]s. In any case, breaking links or common article naming that enable standards is unethical. Clearly the obscure GetWiki_talk:Corpus/InterWiki page is not the place anyone will deliberately look for cross-wiki, cross-platform discussion of the issue. <br />
<br />
Such an arrogant attitude also assumes that Consumerium or [[Metaweb]] will never produce any software to work with that text either. Beyond that, anyone who deletes files that have standard names on several large public wikis without a redirect, is obviously doing so to censor comment. <br />
<br />
Finally, note the self-righteous tone here. Anyone who believes that their technological superiority implies moral superiority, or that comment on policy is spamming, is either insane, or asserting a clear [[commercial point of view]] for their own gain. <br />
<br />
Mr. Parrott's motives here are transparent. This is a good example of the kind of thing that must be strictly forbidden on Consudev:itself. <br />
<br />
This strongly suggests that [[Consumerium:Itself]] must move off GetWiki as Mr. Parrott is highly unlikely to implement any features useful for more deliberative democracy directed methods, e.g. a [[faction]]. <br />
<br />
Whatever merits GetWiki has, it seems they are outweighed by the developer.</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:GetWiki&diff=4106Talk:GetWiki2004-06-24T13:26:23Z<p>Tim Starling: how to make w point to some other location</p>
<hr />
<div>But is there automatic XML-import and hopefully diff facilities for importing from [[Wikinfo|internet-encyclopedia]] or [[wikipedia]] or any number and choice of [[GetWiki]] or [[MediaWiki]] running [[wikis]]?<br />
<br />
:This is a good reason to support a [[m:]] = "multiple", and [[n:]] = "neutral" and [[f:]] = "factional" name space. We select the best of each behind the scenes, rather than hardwiring assumptions about that into the article itself. Why not? We already point to [[w:]] articles which are ever-changing... so why not ever-changing redirects?<br />
<br />
::I'm not understanding what you are getting at with these namespaces. Please elaborate.<br />
<br />
:::See [[interwiki link standard]] first. When that works, we will want some abbreviations. If I think [[Wikinfo]] has the best [[multiple point of view]] in English then [[m:]] should go to [[en:Wikinfo:]], but if [[Wikipedia]] has the only article in French then [[m:]] should go to [[fr:Wikipedia:]], etc.. We need a way to make faction-specific (maybe even individualized) interwiki links. Note this only works if article titles are identical or there is very disciplined use of redirects for all possible titles (which is best anyway).<br />
<br />
:''We should copy and rewrite the articles we refer to in the [[Meta-Wikipedia]] space, as these are not in general relevant to [[Consumerium]] or even to [[mediawiki]]. Rewrites could help get some concepts clearer, and link them to the issues here.''<br />
<br />
We certainly need to get [[GetWiki]] to run a test-wiki to see how it is different and maybe use [[Wikinfo]] or use [[Wikipedia]] as a user [[preference]].<br />
<br />
:A good first step - the user could set where [[w:]] points for their purposes. Why not ask the GetWiki developers for this kind of feature? I'm sure they'll see the need for it. There should be arbitrary code possible to figure out where [[m:]], [[w:]], [[f:]], [[n:]] go. Maybe there already is?<br />
<br />
:Hmm... no point asking [[M.R.M._Parrott]] anything. Better to find people of a rational and [[troll-friendly]]/progressive bent to do yet another fork that [[Consumerium]] can share with [[Recyclopedia]]. There is really no point in trying to collaborate with one's ideological opposites on software projects. Mr. Parrott is a Kantian Platonist, and he further seems to be a narcissist, if one is to practice the same psychiatry on him that he applies to others. Not a reliable partner for anything. [[Trolls]] are done with him. He has seriously failed the test.<br />
<br />
:To set where m, w, f or n points to, use the interwiki table. For example:<br />
<br />
::<tt>REPLACE INTO interwiki (iw_prefix,iw_url,iw_local) VALUES ('w','http://wikinfo.org/wiki.php?title=$1',0)</tt><br />
<br />
:Then restart memcached if you are using it. Interwiki links were moved from includes/Interwiki.php to the database in August 2003. If you have a version of MediaWiki from before then, you need to edit that file rather than the database. -- [[User:Tim Starling|Tim Starling]] 16:26, 24 Jun 2004 (EEST)<br />
<br />
----<br />
<br />
'''[[GetWiki]]: [[critical point of view]]''' is that the software is new, has some bugs, notably in making an [[interwiki link standard]] difficult to express (which [[MediaWiki]] does not) which may be deliberate sabotage for commercial purposes. However its primary drawback is its primary developer, [[M.R.M. Parrott]]. Here is a sample of his [[sysop vandalism]] on [[Wikinfo]]: <br />
<br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:27 . . Proteus (Talk) (deleted "Standard wiki URI": vandalism) <br />
* (diff) (hist) . . GetWiki talk:Corpus; 03:23 . . Proteus (Talk) (restored link) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:19 . . Proteus (Talk) (deleted "Interwiki identity standard": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:18 . . Proteus (Talk) (deleted "Wikinfo:Faction": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Block log; 03:18 . . Proteus (Talk) (blocked "142.177.82.159": stop spamming NOW) <br />
* (diff) (hist) . . GetWiki talk:Corpus; 03:14 . . 142.177.82.159 (Talk) (restore deleted names as redirects please, or at least don't object to their restoral as redirects; theory of conduct is not theory of behaviour, technological superiority implies moral inferiority) <br />
* (diff) (hist) . . M GetWiki talk:Corpus; 03:09 . . Proteus (Talk) (updated link) <br />
* (diff) (hist) . .N GetWiki talk:Corpus/InterWiki; 03:09 . . Proteus (Talk) (created page, moved materials) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:04 . . Proteus (Talk) (deleted "Talk:Standard wiki URI": redundant) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:03 . . Proteus (Talk) (deleted "Wikitext markup": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:03 . . Proteus (Talk) (deleted "Wikitext": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:03 . . Proteus (Talk) (deleted "Wikitax": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 03:00 . . Proteus (Talk) (deleted "Standard wiki URI": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:57 . . Proteus (Talk) (deleted "Talk:Faction": vandalism) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:54 . . Proteus (Talk) (deleted "Wikinfo talk:Faction": redundant) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:53 . . Proteus (Talk) (deleted "Wikinfo:Faction": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . M Sympathetic point of view; 02:51 . . Proteus (Talk) (reverted) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:50 . . Proteus (Talk) (deleted "Interwiki identity standard": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . Sympathetic point of view; 02:49 . . 64.112.195.193 (Talk) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:48 . . Proteus (Talk) (deleted "Interwiki link standard": redundant, moved to GetWiki talk:Corpus/InterWiki) <br />
* (diff) (hist) . . Internet Encyclopedia:Deletion log; 02:46 . . Proteus (Talk) (deleted "Wikinfo:Languages": vandalism) <br />
* (diff) (hist) . . M Trolls; 02:25 . . Proteus (Talk) (updated) <br />
* (diff) (hist) . . Trolls; 02:24 . . Proteus (Talk) (updated) <br />
* (diff) (hist) . . Caucasus by Levan Urushadze; 02:23 . . Levan Urushadze (Talk) <br />
* (diff) (hist) . . M Trolls; 02:19 . . Proteus (Talk) (updated) <br />
* (diff) (hist) . . Internet Encyclopedia:Protection log; 02:18 . . Proteus (Talk) (protected Trolls) <br />
* (diff) (hist) . .N Trolls; 02:18 . . Proteus (Talk) (from consumerium (note changes here)) <br />
<br />
The log is unedited. Anyone who believes it is ethical for a sysop to edit a page and then immediately protect it to curtail comment is probably unethical. This is not the kind of person who should be deciding which features to add to Recyclopedia, [[Consumerium:Itself]] or any other [[deliberative democracy]]-based project, or indeed any [[troll-friendly]] project. <br />
<br />
Anyone who thinks discussion of an [[interwiki link standard]] is only relevant to [[GetWiki]] must think there is nothing else managing the GFDL text corpus (no [[MediaWiki]], for example), which seems to reflect a commercial bias - ironic, as he accuses [[trolls]] of [[spamming]] here! Most likely it is his own spam-like assertions that GetWiki will eventually solve the problem that motivate him to make it hard to find or compare any equivalent proposals on other [[large public wiki]]s. In any case, breaking links or common article naming that enable standards is unethical. Clearly the obscure GetWiki_talk:Corpus/InterWiki page is not the place anyone will deliberately look for cross-wiki, cross-platform discussion of the issue. <br />
<br />
Such an arrogant attitude also assumes that Consumerium or [[Metaweb]] will never produce any software to work with that text either. Beyond that, anyone who deletes files that have standard names on several large public wikis without a redirect, is obviously doing so to censor comment. <br />
<br />
Finally, note the self-righteous tone here. Anyone who believes that their technological superiority implies moral superiority, or that comment on policy is spamming, is either insane, or asserting a clear [[commercial point of view]] for their own gain. <br />
<br />
Mr. Parrott's motives here are transparent. This is a good example of the kind of thing that must be strictly forbidden on Consudev:itself. <br />
<br />
This strongly suggests that [[Consumerium:Itself]] must move off GetWiki as Mr. Parrott is highly unlikely to implement any features useful for more deliberative democracy directed methods, e.g. a [[faction]]. <br />
<br />
Whatever merits GetWiki has, it seems they are outweighed by the developer.</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:Export-import&diff=4102Talk:Export-import2004-06-24T10:28:46Z<p>Tim Starling: please cache locally</p>
<hr />
<div>What chance is there that [[MediaWiki]] will *ever* be useful for this purpose? Its developers are only interested in [[Wikipedia]], and in increasing the power of their [[sysop power structure]] by adding on a [[permission-based model]] no one really needs, but which makes [[usurper|them]] feel powerful.<br />
<br />
They haven't even cloned the [[GetWiki]] facility yet, and it's the ideal way to keep those who [[fork off]] the [[GFDL corpus]] as close to the core corpus as possible. That is of course because they are trying to stop all other [[GFDL corpus access provider]]s, and retain [[trademark]] power over the name "wikipedia", which is actually generic. There are many wikipedias, and the contributor is not seeking to enable or give their work to any specific bunch of "[[Wikimedia]]" thugs, they're seeking to give it to all wikipedias.<br />
<br />
We should be more concerned with [[edits, votes and bets]] and how [[answer recommendation]] might move things from [[Research Wiki]] to [[Publish Wiki]]. Importing a lot of sysop-approved biased nonsense from [[Wikipedia]] should be low on our list of priorities. Why not just use [[GetWiki]] and get it from [[Wikinfo]] instead?<br />
<br />
-----<br />
<br />
GetWiki discourages the construction of a proper fork by allowing users to fetch articles from Wikipedia on demand, whenever they access a page which doesn't exist. This means that a large proportion of the content hosted by a GetWiki site is actually controlled by Wikipedians. What's more, Wikimedia would be within its rights to cease service to any GetWiki site, leaving them out in the cold with a useless leech script. Why not just [http://download.wikimedia.org/ download the database] and end your dependence on Wikimedia? -- [[User:Tim Starling|Tim Starling]] 11:50, 23 Jun 2004 (EEST)<br />
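:''As an aside, a minimal sketch of the difference being described: fetching from the remote wiki on every client request versus keeping a local copy and hitting the remote site at most once per page. The wiki URL, cache directory and page titles below are hypothetical; <tt>action=raw</tt> is a real MediaWiki endpoint, but verify it against your target wiki before relying on this.''<br />
<pre>
# Sketch: serve pages from a local cache instead of leeching on every request.
# Assumes a MediaWiki installation that serves raw wikitext via action=raw.
import os
import urllib.parse
import urllib.request

WIKI = "https://en.wikipedia.org/w/index.php"  # hypothetical target wiki
CACHE_DIR = "page-cache"                       # hypothetical local store

def get_page(title: str) -> str:
    """Return the wikitext of `title`, hitting the remote wiki at most once."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, title.replace("/", "_") + ".wiki")
    if os.path.exists(path):                   # local copy wins
        with open(path, encoding="utf-8") as f:
            return f.read()
    url = WIKI + "?title=" + urllib.parse.quote(title) + "&action=raw"
    with urllib.request.urlopen(url) as resp:  # one orderly fetch
        text = resp.read().decode("utf-8")
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)                          # cached for every later reader
    return text
</pre>
:''Pre-seeding the cache from a [http://download.wikimedia.org/ database dump] instead of fetching page by page serves the same end at scale; either way the remote site is hit once per page rather than once per reader.''<br />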
<br />
:This is bullshit, but it does prove [[Wikimedia]] is a menace to the [[GFDL corpus]]. Wikipedia is not "within its rights to cease service" under some reasonable interpretations of the [[GFDL]]. Since very few [[trolls]] are blocked in both places, the availability of current articles both ways is one way [[Wikimedia]] avoids being called on its frequent [[GFDL violation]]s. It is easy enough to suck the appropriate articles in through various read-only proxies that the [[developer vigilantism|vigilante]] [[usurper]]s don't know about, and never will. They can't track all the tools trolls use.<br />
<br />
:As for "control", so what? The point is that [[GFDL corpus access provider]]s can cooperate, so that anyone else could feed [[Wikinfo]] if [[Wikimedia]] cut it off fascistically. That would put the new feeder in a position of power, as it could serve any other [[mirror web site]] that [[Wikimedia corruption]] deemed a threat to its monopoly.<br />
<br />
:[[Wikipedia]] unrighteously uses a mass of [[GFDL corpus]] content that was donated "to the GFDL itself" not "to Wikipedia" - no ownership rights were ever ceded to [[Wikimedia]] in particular, and even new contributions are not so deeded. So the rights of those contributors and those who you call "wikipedians" are not the same thing, and attempts to make them the same thing are easy enough to slap down legally. We're watching all your mistakes.<br />
<br />
::We care very deeply about preserving the right to fork and the right to freely redistribute Wikipedia content. However our hardware resources are limited. We couldn't possibly serve someone trying to request hundreds of pages per second, although we'd be happy for them to obtain our content in a more orderly fashion. Similarly, we would prefer it if mirrors and forks would cache content locally rather than fetching it from Wikipedia on every client request. It is not against the GFDL to require that they do so. -- [[User:Tim Starling|Tim Starling]] 13:28, 24 Jun 2004 (EEST)</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Talk:Export-import&diff=4096Talk:Export-import2004-06-23T08:50:17Z<p>Tim Starling: GetWiki encourages dependence on Wikimedia, and editorial control by Wikipedians</p>
<hr />
<div>''(Revision text identical to the version above, before the indented replies and Tim Starling's second comment were added.)''</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Developer_usurpation&diff=2327Developer usurpation2004-01-15T00:54:45Z<p>Tim Starling: </p>
<hr />
<div>Bwahahahah</div>Tim Starlinghttps://develop.consumerium.org/w/index.php?title=Consumerium:Village_pump&diff=1458Consumerium:Village pump2003-10-13T08:49:16Z<p>Tim Starling: MediaWiki-L</p>
<hr />
<div><table align="right"><tr><td>[[Image:Village_pump.jpg]]</td></tr></table><br />
'''Hi and welcome to the Consumerium Village pump.''' <br />
<br />
The purpose of this page is to ask questions about how we should go about things here and to answer them.<br />
----<br />
This name is not good imho. There is nothing to buy and nothing to sell at a village pump. It is just a place to lose the soap. Please don't copy the English wiki just for the sake of it.<br />
<br />
:Why not copy the 'pedia? It'll make people coming from [[Wikipedia]] feel at home much more quickly.<br />
<br />
My [[Consumerium:Market Place]] was a much better name.<br />
:That name would indicate there is something for sale there. I've been thinking that once we have some preliminary [[XML]] ready, we will have to set up [[Consumerium:Pseudore]] (Consumerium PSEUDo StORE), a sort of game or playground where we can test the functionality of our markup and simulate different situations, e.g. handling parties who wish to input [[Disinformation]] to undermine the [[Integrity]] and [[Reputation]] of the system. I'll iterate on this once it's time. [[User:Juxo|Juxo]] 14:43 Jun 15, 2003 (EEST)<br />
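::''To make the Pseudore idea concrete, a rough sketch follows - every element name, reputation score and threshold here is invented for illustration, since no Consumerium markup exists yet. It builds a toy product record in XML and naively flags claims from low-[[Reputation]] submitters as possible [[Disinformation]]:''<br />
<pre>
# Toy Pseudore run: build a product record and flag suspect claims.
# All names and numbers are hypothetical placeholders.
import xml.etree.ElementTree as ET

REPUTATION = {"acme_pr_dept": 0.1, "consumer_group": 0.9}  # toy scores
TRUST_THRESHOLD = 0.5

def make_record(product: str, claim: str, submitter: str) -> ET.Element:
    record = ET.Element("product", name=product)
    c = ET.SubElement(record, "claim", submitter=submitter)
    c.text = claim
    return record

def flag_disinformation(record: ET.Element) -> list:
    """Return claims whose submitters fall below the trust threshold."""
    return [c.text for c in record.iter("claim")
            if REPUTATION.get(c.get("submitter"), 0.0) < TRUST_THRESHOLD]

rec = make_record("FizzCola", "contains no sweeteners", "acme_pr_dept")
print(ET.tostring(rec, encoding="unicode"))  # the markup under test
print("suspect claims:", flag_disinformation(rec))
</pre>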
<br />
Sigh. No one ever listens to a poor House Elf :-(<br />
<br />
----<br />
<br />
Hello consumerists. I'm not sure who's in charge around here, but whoever it is, read this. Everyone else, go away ;)<br />
<br />
I am a Wikipedia developer, and we're trying to get all system administrators who are using our software to join the newly-created MediaWiki-l mailing list. We will use this list to tell you about bugs, new releases, security problems, etc. In particular, there is a minor security flaw which we are working to fix at the moment. We expect the volume of this list to be very low.<br />
<br />
To subscribe, enter your details at:<br />
<br />
http://mail.wikipedia.org/mailman/listinfo/mediawiki-l<br />
<br />
By the way, not everyone calls their Q&A page the village pump. The French Wikipedia has a "bistro" for example. <br />
<br />
-- [[User:Tim Starling|Tim Starling]] 11:49, 13 Oct 2003 (EEST)<br />
<br />
----<br />
<sub>The logo is from [[w:Wikipedia:Village Pump]]</sub></div>Tim Starling