One of the most important things any webmaster has to do on a regular basis, other than, you know, master the web, is to advertise his or her website using the best possible methods.
The process of promoting one’s site consists of three fundamental steps:
Step 1: Brainstorm good promotion ideas.
Step 2: Initiate good promotion ideas.
Step 3: Go back to step 1.
At PubCon, Matt Cutts of Google just announced that they will start limiting search results to two subdomains for any one web search. This is a drastic revision that may very well affect your site, possibly quite negatively. Thankfully, the announcement specifically stated that the changes would be rolled out over the next couple of weeks, so you still have time to prepare.
Good SEO often takes a lot of effort, but some of the most important parts of SEO best practices are actually quite simple.
Making your site validate really isn’t that difficult if you code appropriately. And believe it or not, code that validates is good for SEO. When you use semantic markup the way it was meant to be used, search engine spiders can better understand your site, and they will crawl it more efficiently.
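As a quick sketch of what “semantic” means in practice, compare a generic styled div to the element that actually describes the content (the class name and heading text here are made up for illustration):

```html
<!-- Non-semantic: a spider sees only an anonymous styled box -->
<div class="big-title">Should You Keep URLs Consistent?</div>

<!-- Semantic: a spider knows this is the page's main heading -->
<h1>Should You Keep URLs Consistent?</h1>
```

Both render as a large title to a human visitor, but only the second tells a crawler which text matters most on the page.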
But what does this mean, exactly?
The Robots Exclusion Standard (also known as the robots.txt protocol) is the convention whereby search engines agree not to read or index certain content on your site, even though it is freely available for the public at large to view. The way it works is that a robots.txt file tells search engine spiders which pages you don’t want them to read, and assuming the search engine is acting in good faith, it won’t crawl those pages. Obviously, this is not a reliable way of hiding data; you need the cooperation of the search engine for it to work, and even pages that aren’t indexed are still available for viewing by anyone with a web browser. Yet it has its uses.
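To make this concrete, here is a minimal sketch of the protocol in action, using Python’s standard-library parser to check a hypothetical robots.txt (the domain and paths are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that asks all crawlers to skip /private/.
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A well-behaved spider checks before fetching; a browser ignores this file.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/blog/"))              # True
```

Note that the check happens entirely on the crawler’s side; nothing stops a spider (or a person) that simply chooses not to look at the file.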
Today’s article is going to be a little controversial, but in a past article I said that was okay, so I’m not worried. The reason today’s article is controversial is that I’m going to talk about URLs. Despite their innocuous name, the differing uses of URLs tend to create huge disagreements among otherwise quite knowledgeable people.
URLs are the web addresses you usually see at the top of your browser; a URL is basically the pathname of a given internet document. (This article’s URL, for example, might be omnistaretools.com/blog/, or even omnistaretools.com/blog/2007/11/02/should-you-keep-urls-consistent/, since this content is served in multiple locations.) The idea behind URLs is that you can use them to reach specific content at any time. As such, web developers are in near-unanimous agreement that once you put up content at a URL, it should stay at that URL.
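When content genuinely must move, the usual remedy is a permanent (301) redirect so the old URL keeps working for visitors and search engines alike. A minimal Apache .htaccess sketch, with hypothetical old and new paths:

```apache
# Hypothetical example: the post moved from /blog/old-post/ to /blog/new-post/.
# A 301 status tells browsers and search engines the move is permanent,
# so link equity and bookmarks follow the content to its new home.
Redirect 301 /blog/old-post/ /blog/new-post/
```

This is a patch, not a license to shuffle URLs freely; every redirect is one more thing that can break later.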
Web 2.0 is all about socialization. Whereas media may have ruled in the 1.0 era, now that 2.0 is here, social media gets all the attention. Even the old guard has brought web 2.0 to their sites: major newspapers like the New York Times and The Guardian host blog comments and forums where users can give feedback, and major television news corporations like Fox News actively request and air user-generated content, such as video of the recent California fires, or quick comments sent off to The O’Reilly Factor.