The Basic Concepts of SEO :: by Bob Massa


There are some basic concepts that all search services have to follow. They follow these concepts because they are all driven by humans, and humans are governed by certain laws of nature. While there are many things different about spidering engines compared to human-reviewed directories, specifically in terms of scalability, there are some very important things that are the same. They are the same because whether the indexing and categorizing is done by a human or by a computer program such as a spider, the spider was programmed by a human and therefore can only do what a human told it to do within the limitations of the technology available at the time. The spider is going to do its best to emulate what a human would do. It will do whatever it is told to do by a human much faster, but without the skills that are unique to the dominant species.

The moral of the story is, if you want to achieve top placements within an index, whether spider based or human reviewed, you simply have to "think" like a human as opposed to trying to "think" like a computer program. Logic, common sense and human civility towards your fellow man will win out over any computer language every time. You will get more traffic, (and much, much more importantly -- sales), by accepting that you are dealing with a real person, not that different from yourself, instead of thinking you are just a username and password trying to trick a computer.

That is not to say that spidering engines do not have weaknesses that can be exploited, (the same goes for human-reviewed directories, but more on that a little later). They certainly do. It is only saying that to really "see" those weaknesses for what they really are, and "see" how to best take advantage of them to achieve your own placement objectives, it is a great help to first understand how silly things like hidden text, redirects and a lot of other on-page goofiness are. Once you accept that you are dealing with a human being, although it may be once removed, it is easy to understand what that human being was likely trying to accomplish when they programmed the spider in the first place. Understanding and accepting that gives you a huge advantage over your competitors and opens a lot of doors into the mind of the person or persons creating the index.

It so happens that I am one of the most successful placement specialists on the planet. I'm not claiming to be "the best" or to be some kind of "guru". I am simply telling you that I have a lot of experience in this field and I have a reputation within the industry for a reason. I really can tell you EXACTLY how to get a number 1 spot on virtually any keyword. I'm willing to bet that there are some reading this even now who can attest to my ability by pointing to their own pages at the top based on something I had addressed. I was able to start doing that by learning and accepting those basic concepts I mentioned earlier.

As long as we are going to open a topic like this, I will help where I can and I believe the best help I can give is to share those basic concepts. What you do with those concepts is up to you. One of my favorite quotes is, "I don't mind telling you where I think the gold is buried but you have to do your own digging".

I have said many times in the past that I accept no responsibility whatsoever if you use any advice I give and it doesn't work. I have no control whatsoever over any action that any search service other than SearchKing may take. I have no inside deal with any service outside of the same PPC deal or trusted feed deals that any one of you could get. So, if you do anything based on what I say and it goes badly, don't blame me!

On the other hand, I have also often said I am more than happy to accept as much credit as you are willing to give if my advice does help. Still, the purpose of my telling you anything that could apply to search service top placement is more in the way of offering some insight into a different perspective rather than just milking a little verbal pat-on-the-back out of someone. I am only relaying my take on things based on my own personal experiences in the hope of motivating grey matter and intelligent discussion, (present author excluded).

As any discussion of techniques used to get to top spots on search services tends to become a heated argument at worst, and lengthy, convoluted and self-congratulatory at best, I will try to keep my offerings at a "reasonable", (completely subjective term), length. I will discuss the few concepts I am relatively sure of one at a time and only start another discussion after the previous one has run its course. So here goes the first one.


Any search engine will always try to return whatever it has available that it thinks is the most relevant data to a specific query based upon ranking criteria set by the specific search service itself. That is the search service's core mission.


Given the time and motivation, the stupidest human will win out over the smartest computer every time.

Did you by any chance see the movie Jurassic Park? In that movie, Jeff Goldblum's character had a line that went:

"Life will always find a way."

That sums up the essence of this concept pretty well.

Every single thing a spidering search engine does has a pattern. It does absolutely nothing without being told to do it for a specific purpose. No matter how many variables exist, no matter how complicated an algorithm may appear, there is a pattern and that pattern can be found by any human willing to look long enough and at enough variables.

If you decide it is too hard for you to understand, then you are 100% correct: for you, it will be.

Spiders are nothing more than computer programs, and they are dead. They do not get vibes, premonitions or intuitions. They don't get scared, steamed up, mad as #### or lovesick. If they have been programmed correctly, they do not waver, hesitate or sacrifice for the greater good. They only do EXACTLY what the humans doing the programming told them to do. A spider is a tool and nothing more.

Once you understand this concept, you have a huge advantage over the machine. You can guess, imagine, deduce and realize what the machine is attempting to do and then second-guess its actions and reactions. You can test theories and make changes based on your findings very quickly, while the machine will take considerably more resources to alter its behavior to counteract any human's behavior.


Spidering engines do not put the good stuff at the top. Instead, they try to put the bad stuff at the bottom.

While this statement, at first glance, may seem no more than a variation based upon semantics, it is in fact, the single most eye-opening revelation I had when I first began targeting top placements within spidering search engine results.

In the beginning, like most people, I was thinking that the best sites got the top spots. I tried to build the "best" site for each topic at hand. After a few failed attempts, it hit home that "best" is a completely subjective term. What is "best" to me, may not be "best" to you. In fact, when it comes to websites that I have built compared to websites that you have built, disagreement as to the definition of best is virtually guaranteed. The same principle applies to the person writing the program telling the spider what to "think".

Once I accepted that defining best was subjective, it became more important to me to find out what the search engine thought was best than to cling to my self-centered view of my own definition. I realized that if I wanted a top placement, what I thought was best didn't matter; only what the search engine thought mattered. This is the concept that got me to start learning how to reverse engineer my competitors' sites.

As I began looking intensely for things that formed a pattern within the top 10 results for any given search, it became apparent that there were indeed several things that appeared to make the search engine think a particular site was a "good" match for a specific query. The most obvious ones were, (keep in mind we're going back to late 1996 here), keywords in the meta keyword tag, keywords in the title tag and keywords in the meta description tag. As I learned to look more closely, keyword density, link structure, H tags and more obscure things such as meta refreshes were exposed.
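The kind of pattern-hunting described above can be sketched in a few lines of code. This is my own hypothetical illustration using only the Python standard library, not anything any engine or SEO tool actually ran: it pulls the title, meta keywords and meta description out of a page, and computes a crude keyword density over the visible text, the same 1996-era signals listed above. The sample page and all names are invented for the example.

```python
from html.parser import HTMLParser
import re

class OnPageSignals(HTMLParser):
    """Collect the 1996-era ranking signals named in the article:
    title text, meta tags, and visible body text."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}        # meta name -> content
        self.visible = []     # chunks of visible text
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in a:
            self.meta[a["name"].lower()] = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        else:
            self.visible.append(data)

def keyword_density(text, keyword):
    """Fraction of words in `text` equal to `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return words.count(keyword.lower()) / len(words) if words else 0.0

# Invented sample page standing in for a competitor's top-ranked document.
page = """<html><head><title>Cheap Widgets</title>
<meta name="keywords" content="widgets, cheap widgets">
<meta name="description" content="Buy cheap widgets here.">
</head><body>Widgets and more widgets, all cheap.</body></html>"""

p = OnPageSignals()
p.feed(page)
print(p.title)              # Cheap Widgets
print(p.meta["keywords"])   # widgets, cheap widgets
print(round(keyword_density(" ".join(p.visible), "widgets"), 2))  # 0.33
```

Run against enough top-10 pages for the same query, a dump like this is exactly the raw material in which the patterns described above start to show.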

After looking at literally thousands of sites over a few months, one thing became very clear. The sites the search engine thought were good enough to be on top were some of the most gawd-awful looking sites on the planet. Stark white backgrounds with colored text and disproportionate H tags made the top 10 sites look more like a ransom note written by a third-grade dropout than anything that could realistically be considered a professional sales presentation. There could be little justification from anyone that these top results were the top sites in terms of being "best" by any kind of measuring stick.

Why? How could the search engine get it so wrong? Then it started becoming clear to me.

You can't write a program to add relevancy points to a document based on emotional responses. You can't tell a program to put a pretty site at the top of the results. You can't tell it to reward ease of navigation, because a spider has no concept of easy. So it's not that all the really cool, advanced, cutting-edge, multimedia stuff in web design and functionality was being penalized; it was simply ignored, while the things that could be given a mathematical boost were virtually void of any esthetic value.

It is easy to tell a spider to add and subtract for specific data. You can tell it to subtract points if characters are repeated in specific parts of a web document. In other words, you can tell it to penalize a document if the same word appears multiple times in a title tag or a keyword tag, or if one word or phrase is over-repeated in visible text. You can tell it to look for specific things like text the same color as the background, but after all these years, the brightest programming minds devoted to search have yet to come up with anything better than simply ignoring the meta keyword tag altogether, just as one example of the limitations of a spider.
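That "subtract points for the bad stuff" logic is easy to demonstrate. The following is a toy sketch of my own, with made-up thresholds, not any real engine's filter: it starts a page at zero and docks it for a word stuffed into the title tag or for one word dominating too much of the visible text.

```python
import re

def spam_penalty(title, visible_text, max_title_repeats=2, max_density=0.10):
    """Toy 'put the bad stuff at the bottom' filter. The thresholds
    (2 title repeats, 10% density) are invented for illustration."""
    penalty = 0
    title_words = re.findall(r"[a-z0-9']+", title.lower())
    for word in set(title_words):
        if title_words.count(word) > max_title_repeats:
            penalty += 10  # same word stuffed into the title tag
    body_words = re.findall(r"[a-z0-9']+", visible_text.lower())
    if body_words:
        top_count = max(body_words.count(w) for w in set(body_words))
        if top_count / len(body_words) > max_density:
            penalty += 5   # one word over-repeated in the visible text
    return penalty

# A stuffed title earns a penalty; the clean body copy does not.
print(spam_penalty("Widgets widgets widgets sale",
                   "Plain readable copy about widgets and how they are "
                   "made in our shop today."))  # 10
```

Notice what the function never does: it never adds points for anything. It can only recognize patterns the programmer decided were bad, which is precisely the point of this concept.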

On the other hand, you cannot add points to a site because it looks good, is easy to use, or sells better than another related site. In other words, the very things most humans would consider necessary before a specific site could be considered best.

My conclusion was that writing a program to find good stuff and put it at the top of a set of results was much more difficult, (if possible at all), than writing a program to find bad stuff and put that at the bottom.

You may be wondering just what significance, if any, this may have on what you do or how you see your own relationship with spidering engines. All I can do is tell you what impact accepting this concept had on me and hope it at least gives you something else to consider.

To this day you can find SEO, (whatever that is), forums telling people that if they want top placements, all they have to do is build the best site they can, keep adding content, and the placements will come. This is simply not the case in my experience. You have to first accept that there are at least two participants in the judging of best: you, and the thing doing the judging, and it is the thing doing the judging that matters. If your definition of best is different from that of the engine you are targeting, you lose! A much more accurate piece of advice would be to tell people to build a site the target search engine thinks is best, and that is not the same thing.

Accepting this concept enabled me to realize that if the objective is to build the best site AND place well on major spidering search engines, then what I thought was best had nothing to do with it. It enabled me to accept that, more often than not, a site that could place at #1 for a search term was not the site that would convert traffic to customers very easily. It enabled me to see beyond the marketing hype of major search services and "see" the frailties and learn how to best deal with them. It taught me to understand setting objectives and strategies with websites and then building them to meet those objectives.

If search engines really could write programs to put the good at the top instead of putting the bad at the bottom, I would likely take a very different approach, but at least for the foreseeable future, they can't. They have to rely on simply reading characters in source code and adjusting relevancy points accordingly, and now, thanks to link popularity, they also rely on humans to tell the spider what is good. That has made things better, but the spider program has a way of not letting human input win, because the system still works the same even with links being counted. A program can filter for too many of this or too few of that, but it still cannot reward, or even recognize, good. Therefore, it cannot put good at the top; it can only identify what the programmer thinks is bad and put that at the bottom.

Continue on to Part Two of the Basic Concepts of SEO


Copyright © 2003 - 2013 Escalate Media LP