1. Identify and secure the weakest link.
In my opinion, today's weakest link is software.
2. Practice defense in depth.
Another way of thinking about that is to manage risks with multiple strategies. A good example: When you're trying to figure out who the user is, use multiple forms of identification.
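A minimal sketch of that idea, assuming hypothetical stored credentials: require two independent checks (a password and a one-time code delivered out of band), so defeating either one alone isn't enough.

```python
import hashlib
import hmac

# Hypothetical stored credentials for one user (illustration only).
STORED_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()
EXPECTED_CODE = "492871"  # e.g. delivered out of band via SMS or an app

def authenticate(password: str, one_time_code: str) -> bool:
    """Defense in depth: both independent checks must pass."""
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), STORED_HASH
    )
    code_ok = hmac.compare_digest(one_time_code, EXPECTED_CODE)
    return password_ok and code_ok
```

Note the use of `hmac.compare_digest`, which compares in constant time so a timing attack on one layer doesn't quietly weaken it.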
3. Be reluctant to trust.
Be skeptical of software that you buy from software vendors. Here's a technical example: When building a client-server system, make sure the server always treats the client software as suspect.
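A sketch of that server-side stance, using a hypothetical withdrawal handler: even if the client claims it already validated the input, the server re-checks everything itself before acting on it.

```python
def handle_withdrawal(balance: float, amount_field: str) -> float:
    """Server-side handler that treats client input as suspect.

    The client may have validated `amount_field` already; the server
    does not care, and validates everything again.
    """
    try:
        amount = float(amount_field)
    except ValueError:
        raise ValueError("amount is not a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```

A hostile client can send `"-50"` or `"NaN"` no matter what the client-side code checks, so the server's checks are the ones that count.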
4. Remember that hiding secrets is hard.
A lot of people believe that if you compile source code into binary machine code, no one can read it. That's wrong. People also think that if they don't tell you how their software works, you won't be able to figure it out. That's wrong too.
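A quick demonstration of why: the Unix `strings` trick pulls printable text straight out of "unreadable" binary data. The sketch below mimics it in a few lines, with a made-up blob standing in for a compiled program.

```python
import re

def find_strings(blob: bytes, min_len: int = 6) -> list[str]:
    """Mimic the Unix `strings` tool: pull printable ASCII runs
    out of otherwise opaque binary data."""
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, blob)]

# A "hidden" API key compiled into binary data is still plainly visible.
binary = b"\x00\x7f\x13ELF\x01" + b"api_key=hunter2secret" + b"\x05\x00"
print(find_strings(binary))  # → ['api_key=hunter2secret']
```

Disassemblers and decompilers go much further than this; a secret compiled into a binary should be treated as published.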
5. Follow the principle of least privilege.
That means don't give out more privilege than you have to. For example, don't provide access to your entire file system. Provide access to one file. Provide access on a need-to-know basis.
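One way to sketch that in code, using a hypothetical wrapper class: instead of handing a component the whole file system, hand it an object that can read exactly one file and nothing else.

```python
from pathlib import Path

class SingleFileReader:
    """Least privilege: grants read-only access to exactly one file,
    instead of handing a component the entire file system."""

    def __init__(self, allowed_path: str):
        self._allowed = Path(allowed_path).resolve()

    def read(self, requested_path: str) -> str:
        # Resolve symlinks and relative paths before comparing,
        # so the check can't be sidestepped with "../" tricks.
        if Path(requested_path).resolve() != self._allowed:
            raise PermissionError(f"access denied: {requested_path}")
        return self._allowed.read_text()
```

A component that only ever receives a `SingleFileReader` can't be tricked into reading anything else, no matter what path an attacker feeds it.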
6. Fail and recover securely.
When things fail, sometimes they fail in an insecure fashion. That's bad. A great hacker trick is to make code crash and watch what happens. When your code fails, like when it throws an exception, make sure that exception is handled properly.
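Here's a minimal sketch of handling failure properly, with a hypothetical authorization check: any unexpected error during the check means "deny," never "allow." The code fails closed instead of crashing or defaulting open.

```python
def is_authorized(user: dict, resource: str) -> bool:
    """Fail securely: an exception during the check must mean 'deny'.

    A crash in the middle of an access check must not leave
    the door open.
    """
    try:
        # Hypothetical policy lookup; a malformed user record
        # (e.g. missing the "permissions" key) raises KeyError.
        return resource in user["permissions"]
    except Exception:
        # Fail closed: deny rather than propagate the crash.
        return False
```

An attacker who can make this code throw an exception gets a "no," not an unhandled crash to probe.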
7. Compartmentalize.
The idea is to limit an attacker's ability to do damage. A real-world example is putting chambers in submarines so that if one chamber gets flooded, the whole submarine doesn't sink. The counterexample in software is the "superuser" in Unix systems. When you become the superuser, or "root," on a Unix box, you get all the privilege in the world. That's often a bad model. Better to have only what little privilege you need. When designing code, you can design in a modular fashion so that if one module gets compromised, the whole system isn't.
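A small sketch of that modular design, using made-up capability classes: each module is constructed with only the capability it needs, so compromising the mail module hands an attacker mail privileges and nothing more.

```python
class MailCapability:
    """Capability handed only to the mail module."""
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}"

class BillingCapability:
    """Capability handed only to the billing module."""
    def charge(self, account: str, cents: int) -> str:
        return f"charged {account} {cents} cents"

class MailModule:
    # Compartmentalized: this module receives a mail capability and
    # nothing else. Taking it over gains no billing privileges --
    # the flooded chamber stays sealed off.
    def __init__(self, mail: MailCapability):
        self._mail = mail

    def notify(self, to: str) -> str:
        return self._mail.send(to, "your report is ready")
```

Contrast this with a design where every module shares one global "root" object: there, any single compromise is a total compromise.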
8. Keep it simple, stupid.
Complex code simply tends to be buggier. If you take a very simple Web browser and add more and more stuff to it--virtual machines, macro languages, COM components, integration into the OS--the more you add, the more complicated it gets, and the harder it is to make secure.
9. Keep trust to yourself.
Meaning, don't give out more information than you need to. For example, most programs that listen on ports on a computer will happily tell you what version they are, and you look them up in your little book and you can see how hackable they are. Most browsers will tell you which browser release they are, what patch level, what OS they're running on, and all sorts of stuff the Web server doesn't really need to know. There's all this extraneous information flying around the Net, and there shouldn't be.
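A sketch of trimming that extraneous information, with a hypothetical scrubbing function: strip the headers that advertise software name, version, and patch level before a response leaves the server.

```python
# Common identifying headers; real servers leak via Server,
# X-Powered-By, X-AspNet-Version, and similar fields.
IDENTIFYING_HEADERS = {"server", "x-powered-by", "x-aspnet-version"}

def scrub_headers(headers: dict) -> dict:
    """Drop response headers that advertise what software and version
    is running -- information the client doesn't need and an attacker
    can look up in their little book."""
    return {
        name: value
        for name, value in headers.items()
        if name.lower() not in IDENTIFYING_HEADERS
    }
```

This doesn't make the server secure by itself -- fingerprinting can often guess the software anyway -- but it stops handing the answer out for free.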
10. Assume nothing.
Question all assumptions and choices you make when you develop software. It's very hard not to be shortsighted. So it's really good to get other people to assess your design and help you with risk analysis. Something that goes along with this is: Never trust security claims people make. Computer security these days is chock full of snake oil.