I'm not at all sure it's wise to publicly list the sensitive points that could be used to push Consumerium aside in favour of a more productionism-oriented system with similar features, but lacking any observance of corporate social responsibility issues. Juxo 17:25 Jun 22, 2003 (EEST)
- Well, there are two attitudes: 1. assume that you are a genius who can figure all this out in advance without ever writing it down or working out weak points ("security by obscurity"); 2. assume that you need to brainstorm about the problems that can be visited on you in a competitive world, by those who seek to subvert, and that you can best discourage them by working out a robust solution, with good answers to all the threats and worst cases. You have no non-public way to do this, and surely you would rather be notified of potential attacks or competitors this way than by having them suddenly appear, exploiting flaws in the licenses and audit process that you never thought of, or discussed? It's the same issue as computer security: you can be transparent, or you can try to hide flaws in the hope that this hiding will keep you obscure. But if it does, you aren't having any influence. So the winning strategy is probably the more open one.
- The Center for Consumer Freedom is the kind of opponent we are up against. They are smart, and well-funded, and you had better believe they do this kind of analysis.
- If you stick to that policy, you doom the project. No insult to you, but you are simply not smart enough to understand all such cases, not wise enough to know which matter when, and require a lot of help to think of how to deal with them, and what policy to take. Especially on complex matters like protocols and licenses where lots of experience and input is required. This is exactly the argument against closed source for security products. Aside from all that, a high-overhead way to submit such cases, that only a few people have the tools to use, guarantees you will not see most cases people think of. Successful projects like Wikipedia have revealed and discussed many m:worst cases and show no sign of being wiped out yet.
- If you stick to that policy, any worst case that trolls think of will appear somewhere else and be easily found on Google as a Consumerium Exploit or under some other obvious tag. You will not have write access there, so you will not be able to limit its visibility to others. So your choice is not "hide or publish"; your choice is "publish here or lose all control" over the presentation of such worst cases. You may take that as both a threat and a statement of inevitability. Trolls are trolls. Consider yourself to be bitten on the leg. We are not helping you create a monopoly you do not deserve (yet), nor are we wasting our time on a stupid and provably-wrong strategy. So, reconsider.
Threats has now been created, and some of what is listed there may be worst cases. If you see a threat you actually believe in, or can construct a worst case you really believe could happen out of what you see at threats, add it to these other worst cases.
Stories will help too. They can be based on either worst cases or best cases. The best stories are those that show how a best case becomes a worst case through a small difference in the system's design or governance protocol. Take, say, the difference between Gus Kouwenhoven getting to grow his business, or his buddies deciding he's a liability, wasting him, and using him to feed the same loggers he now feeds chimps to (cannibalism either way).