Tuesday, May 19, 2009

SEO Techniques You Should Avoid

Conflict between artistic and marketing interests on one side and the demands of SEO on the other appears in the development of almost every website. Most of these conflicts have an easy way out, but some are rather complicated. Sometimes we are even forced to accept a weakness in our website, and then "sacrifice" more in other areas to compensate for it.

Maybe we compensate through copywriting, so that our website has more content. We write more articles that link to our website, so that it gets more backlinks. Others take a shortcut by applying what is popularly known as black-hat SEO. In principle, black-hat SEO justifies any means of getting a high ranking. It usually works by playing a "trick" on search engines, making them regard the website as a source of relevant and highly valuable information, so that the website gets a high ranking.

Many online businesses even practice this deliberately and on a large scale, so that their websites get high rankings for targeted keywords even when the actual content of the websites is not relevant to those keywords. In general they are chasing quick profits, so if one website gets "caught" they stand ready with other websites.

If your goal in doing business online is long-term, avoid techniques like these. Even if they succeed, you will only enjoy the success for a moment. Once a search engine detects the "fraud" you have committed, the punishment you receive is not merely a drop in ranking.

Here are some of the techniques that are frequently used. I deliberately use the English terms, because they are the popular ones. At least this will help you if you want to research them online in more depth.

Domain Cloaking 

At first glance you may see this technique as a brilliant idea. The concept is to create two different versions of each page. If the page is accessed by a regular visitor, it displays something beautiful and interesting. But if the page is accessed by a search engine robot, it serves content optimized for SEO purposes, which may be far from interesting and barely comprehensible to an ordinary visitor. In short, this technique pursues a high ranking in the search engines while keeping a view that remains attractive to regular visitors.

Technically, the main requirement is a program that can detect whether a visitor is a human or a search engine robot. If the visit is detected as coming from a robot, it is directed to a fully optimized page. The problem for the practitioners is that most search engines now have the ability to detect this practice, so there is hardly any time to enjoy success before getting "caught". Once the cloaking is detected, not just one page but the whole website will be blacklisted. Not only will its pages no longer be read and indexed; pages from the same website that had already been indexed will be thrown out. Worse, the website will never be visited again. So even if you realize you have been "caught", repent, and improve your website, it is too late.
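To make the mechanism concrete, here is a minimal sketch in Python of the kind of program this technique relies on. The robot names are real crawler identifiers (Googlebot for Google, Slurp for Yahoo, msnbot for MSN); everything else, including the page contents and the server setup, is hypothetical illustration, not a recommendation. Notice how easily the trick is exposed: a search engine only has to visit once with a browser-like User-Agent and compare the two responses.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Real crawler identifiers: Google, Yahoo, and MSN respectively.
KNOWN_BOTS = ("Googlebot", "Slurp", "msnbot")
HUMAN_PAGE = b"<html><body><h1>A beautiful page for visitors</h1></body></html>"
BOT_PAGE = b"<html><body>keyword keyword keyword ...</body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The whole "trick": branch on the visitor's claimed identity.
        ua = self.headers.get("User-Agent", "")
        body = BOT_PAGE if any(bot in ua for bot in KNOWN_BOTS) else HUMAN_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CloakingHandler).serve_forever()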

Duplicate Content 

In building a website, filling it with text often turns out to be a very serious obstacle. If you are not a writer yourself, you may need to spend large amounts of money to pay one. If you want a website with 100 pages, you will have to write as much as a novel. If you are not capable of pouring imagination into words like Mira W, or of pouring ideas into papers and books like Hermawan Kartajaya, what would you do?

Many take the short road and steal content from other websites for their own. Watch out! Google, Yahoo, and MSN were created in the United States, not in Indonesia or China. Search engines respect copyright, and this practice is very dangerous. Furthermore, even if you do not violate copyright, for example because you have permission from the owner of the website whose content you take, search engines still dislike the practice, because they do not want their search results to show entries that all lead to the same content.

To enforce this, search engines are equipped with relatively sophisticated detection systems, among other things to detect whether content is "fresh". If the robots find pages whose content is identical to something found earlier, the page found first is selected. The rest are treated as duplicates: their value is lowered, or they are not indexed at all.
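How might such a detection system work? Here is a minimal sketch of one classic approach, comparing pages by their overlapping word "shingles". The 0.9 threshold is an arbitrary illustrative value, not a known parameter of Google, Yahoo, or MSN, whose real systems are far more elaborate.

def shingles(text, k=3):
    """Break a text into overlapping k-word phrases."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

def is_duplicate(page_a, page_b, threshold=0.9):
    return jaccard(shingles(page_a), shingles(page_b)) >= threshold

# A lightly retouched copy still shares almost all shingles with the original.
original = "The Linux Book by David Elboth reviews the Linux operating system"
copy = "The Linux Book by David Elboth reviews the Linux operating system today"
print(is_duplicate(original, copy))  # True: treated as a duplicate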

In general, search engines categorize duplicate content into four groups:

1. Articles with wide distribution. One article can appear on many websites that cover related topics. Such articles are generally written by people who want their writing published as widely as possible; examples are press releases, articles about particular experiences, and so on. Although there is no copyright conflict, because the author genuinely wants the writing duplicated, no matter how important the piece is, if it appears hundreds of times on various websites it may still be treated as a duplicate, and the chance of each copy being indexed is small.

2. Product descriptions in online shops. Manufacturers usually allow the stores that sell their products online to take the product descriptions from the manufacturer's website and display them on the store's pages. Almost none of these product descriptions are indexed by search engines, except the version displayed on the manufacturer's own website. This is a form of compromise on the part of the search engines: they are designed to "leapfrog" the duplicated product descriptions. However, the effect stops there; the online store in question does not get "punished" by being excluded from the index.

3. Duplicate pages. Without any negative intent, duplication sometimes just happens, for example when the same information can be accessed as different pages. Say your website contains a page reviewing The Linux Book, a title about the Linux operating system written by David Elboth and published by Prentice Hall. The construction of the site may well cause that review to be accessible as a different page depending on whether you reach it from the list of topics, of authors, or of publishers. Generally this only causes the search engine to index the first page it finds (see the sketch after this list).

4. Fill in "thievery." It is the practice where the pages are filled with material taken from another website, generally without the owners permission, perhaps with a little touch of re-appear so different. Search engines have the ability to detect. This practice is very dangerous, not only that page is not indexed, can also cause the overall site is not indexed and not visited again. I suggest not to take this step even if you get permission from the owner of the website contents that you take. The reason is simply: Where's the search engine robots know if you get permission?

Hidden Pages 

These are pages optimized for search engines and intended to be "seen" only by search engines, while staying "hidden" from ordinary visitors. The techniques for "hiding" them vary. The easiest is to create a link from the homepage to the page in question that does not attract visitors' attention.
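For illustration, here is a minimal sketch of what such a "hidden" link can look like, together with a naive check for it. The markup and the display:none trick are hypothetical classroom examples; real search engines detect far subtler variations.

import re

homepage = """
<html><body>
  <h1>Welcome!</h1>
  <!-- A link ordinary visitors never notice, but robots follow: -->
  <a href="/seo-page.html" style="display:none">cheap widgets</a>
</body></html>
"""

# Naive detector: flag any link whose inline style hides it from human eyes.
hidden_links = re.findall(
    r'<a[^>]*style="[^"]*display:\s*none[^"]*"[^>]*>', homepage)
print(hidden_links)  # the hidden link above is flagged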

If you have been lucky so far and have not been "punished" by the search engines, throw the technique away and run. It is only a matter of time before the search engines find it.
