How To Create Unique Passwords For Every Site And Remember Them Easily?

In general

It’s an algorithm that can be applied almost anywhere and requires no special skills. First, let’s go through the cons and pros so you can decide whether it would give you an advantage or not.

Cons:

  • depends on the particular site to some extent (vulnerable to changes there)
  • requires a few seconds of thinking (depending on the complexity of your own algorithm)

Pros:

  • an absolutely, undoubtedly unique algorithm – the base is clear, the implementation is your own
  • can be extended or tuned if needed – flexibility is everything
  • no need for a keystore or Internet connectivity
  • hardly guessable even if someone steals one or two of your passwords

If the pros outweigh the cons for you, here we go.

The trick

This password algorithm is similar to the ones used by password-generator websites, where you have:

a) domain name

b) keyword of yours

so the website hashes the domain with your keyword and generates a unique string to be used as the password. The problem is the unreliability of such websites – you might not have an Internet connection, the site could change, someone could break into it, and so on.
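As a rough illustration of what such a generator does – this is only a sketch of the general idea, not any particular site's implementation; the function name and the choice of SHA-256 are my own assumptions:

```python
import hashlib

def site_password(domain: str, keyword: str, length: int = 12) -> str:
    """Derive a deterministic password by hashing domain + keyword.
    Illustrative only: real generators may use a different hash,
    salt, or encoding."""
    digest = hashlib.sha256((domain + keyword).encode()).hexdigest()
    return digest[:length]

# The same inputs always produce the same password,
# so nothing needs to be stored anywhere.
print(site_password("google.com", "mysecret"))
```

The same determinism is what our offline algorithm below provides – without depending on a third-party site.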

Our algorithm works this way:

  1. you take some parts of the website/program (domain name, header, title – something unlikely to change soon)
  2. you transform them with your own scheme
  3. you use the final sequence as a password
  4. as a result, the final sequence for each site looks different and is hardly guessable

The main issue here is that you count on the site/software staying constant. If you rely on the domain name and the owner migrates to another one, you have to use the ‘forgotten password’ link or just remember the last version. Shitty, but a rare case.


A few examples as proof of concept.

Simple Algorithm: we count the domain symbols (length), multiply by 3 and append the number to the reversed website name.

If we take google.com for instance, our password would be elgoog27 (google reversed, and the length 9 multiplied by 3).

When using freelancer, there we have recnaleerf57 (using only the first part of the domain in the algorithm).
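The simple algorithm fits in a few lines of Python – a minimal sketch, assuming “domain symbols” means the letters of the full domain without the dot (one reading that reproduces elgoog27 for google.com):

```python
def simple_password(domain: str) -> str:
    """Reverse the first label of the domain and append 3x the
    symbol count (dots excluded - an assumption, not a rule
    from the article)."""
    name = domain.split(".")[0]          # "google.com" -> "google"
    count = len(domain.replace(".", ""))  # "googlecom" -> 9
    return name[::-1] + str(count * 3)

print(simple_password("google.com"))  # elgoog27
```

The point is not this exact counting rule but that any fixed, repeatable transformation works.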

Complex Algorithm: we take the second and third letters of the website name, increment the first of them alphabetically (‘a’ becomes ‘b’, ‘d’ becomes ‘e’ etc.) and decrement the second. We use the string ‘$xZ’ as a constant after the transformed letters. We append the last 3 symbols of the source code of the main page. At the end we add the lengths of the subdomain (if any), the main name and the top-level domain.

Taking the same example, we get pn$xZpt>63 (‘oo’ with inc(1) and dec(1), then the literal, then the last three symbols of the source – the closing script tag – and finally 63, the lengths of ‘google’ and ‘com’). Hardly guessable, although the source code might change at some point (use at your own risk).

If I apply it to my own freelancer blog – freelancer.peshev.net – I get sd$xZml>1063 (‘re’ translated, the constant, the end of the html tag and the lengths of ‘freelancer’, ‘peshev’ and ‘net’).
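Here is a hedged Python sketch of the complex algorithm. Lowercase domains are assumed, and fetching the page source is out of scope, so it is passed in as a string:

```python
def shift(ch: str, delta: int) -> str:
    """Shift a lowercase letter alphabetically, wrapping z -> a."""
    return chr((ord(ch) - ord("a") + delta) % 26 + ord("a"))

def complex_password(domain: str, page_source: str) -> str:
    """Second and third letters of the first label, shifted +1/-1,
    then the '$xZ' constant, the last 3 symbols of the page source,
    and the length of every domain label."""
    labels = domain.split(".")
    first = labels[0]
    letters = shift(first[1], 1) + shift(first[2], -1)
    lengths = "".join(str(len(label)) for label in labels)
    # Note: a real page may end with a newline - strip trailing
    # whitespace before slicing if you automate this.
    return letters + "$xZ" + page_source[-3:] + lengths

print(complex_password("google.com", "...</script>"))          # pn$xZpt>63
print(complex_password("freelancer.peshev.net", "...</html>"))  # sd$xZml>1063
```

Of course, the whole idea is to run these steps in your head – the code only demonstrates that the mapping is deterministic.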

You can define an algorithm of your own, based on your needs and expectations. The point is that you remember only the steps; the rest is translation at runtime, when you visit the site.


Your thoughts?