Modern Problems of Search Engine Optimization. Basic Concepts and Principles of SEO

What tasks does search engine optimization solve?

Search engines are the most important tool for navigating the Internet today. With their help, users search for information on the Web, compare, analyze, ask for advice, look for like-minded people and acquaintances, and even seek the meaning of life. Directories used to be the most popular navigation tool, but as the volume of information grew, their size and branching grew so much that they became either excessively complex for the user or too sparse. At the same time, the quality of search engines has improved significantly over the past few years. It is therefore not surprising that users have switched to search engines en masse.

Having become the most popular sites on the Internet, search engines enjoy an added scale effect. They are now not only the most visited sites, but also the best known. As a result, when a user accesses the Internet for the first time, he goes first to the site he already knows from friends, from the press, or from offline advertising, that is, to a search engine.

This state of affairs will persist for a long time, since a significant proportion of users are not very familiar with computers in general, and the number of such users in our country is apparently growing. Less savvy Internet users treat the search engine's query line as the browser's navigation bar. Many users do not distinguish between the concepts of "Internet" and "search engine" at all. This is clearly visible in the number of search queries containing a site's address.

That is why site optimization, the process of reaching top positions in search engine results for a company's target queries, is so important to many companies. Why is it important to take one of the first places? Because users, on average, view one page of search results and rarely visit sites whose links appear only on the second page and beyond.

Never mind the second page! A study of user click-throughs from Yandex search results, conducted several years ago by SpyLOG and Ashmanov and Partners, showed that the share of clicks a site can expect on the seventh line tends to zero; that is, sites below sixth place in the results are also "overboard". Above the search results sit advertising lines, of which there are more and more over time. They also reduce the number of clicks that sites get in the results, because to users these, too, look like search results.

Incidentally, for North American sites, visits from search engines account for about 60% of all visits on average, and for corporate sites the share is even higher. In other words, the average site receives more than half of all its visitors from search engines. On the Russian-speaking Internet the share of search traffic is lower, but it is still very large and constantly growing.

That is why optimization today is a large, branched service market with significantly more players than the online advertising market. The volume of this service market in Russia in 2008 is estimated at $200 million, only three times less than the advertising market. And how could it be otherwise, when the effectiveness of this marketing method is in no way inferior to other advertising tools on the Internet!

Optimization is a set of techniques and methods that allow a site to reach top positions in search results. All optimization techniques are open and described in numerous forums, on specialized optimizer sites, and in countless articles. It is very important that there are no "secret" optimization methods: everything here is transparent and has long been known. What matters most for the speed of obtaining results is the optimizer's experience, that is, the ability to quickly assess the situation and choose the right working methods, but even a beginner, armed with patience and perseverance, can achieve excellent results.

WHEN BEGINNING THE OPTIMIZATION PROCESS, BE CLEARLY AWARE OF THE FOLLOWING POINTS.

1. It is a path of trial and error. Although there are fairly accurate "recipes for success", in any particular case they may not work, and the probability of success falls as more fellow optimizers work on the same search words in the same market sector. You have to try all optimization methods to achieve a result.

2. Optimization is a long process. Even if you make changes to the site quickly, the search robot will not update its information about the site immediately, but at best in a few days, more likely in a week. That is why the optimization process usually stretches over many months, and all results come very gradually.

3. Optimization is a painstaking process in which many factors must be taken into account: the characteristics of each search engine, the characteristics of the market in which the company operates, the activity of competitors and the actions they take, and so on. Moreover, all these factors must be taken into account constantly, not just once at launch.

4. Optimization is an unstable process. Search engine algorithms change constantly, and the market landscape also changes through competitors and their optimization actions. The success a company achieved a few days or weeks ago may therefore come to nothing today. Consequently, optimization must be done continuously.

5. Search engines resist the efforts of SEOs because those efforts degrade the quality of search. Rigidly and unilaterally, search engines regulate acceptable optimizer behavior and unceremoniously remove sites that, in their opinion, break these rules from the search results. Moreover, the rules are not published and change constantly, so any optimization action may be "outlawed" tomorrow.

Why do search engines fight optimizers?

As a result of optimizers' actions, search results change. Accidental results disappear or sink in the rankings: links to forums, links to long-vanished pages. The search undoubtedly becomes better for this: its precision increases. At the same time, however, completeness, that is, the coverage of the various topics related to the query, drops sharply. For example, the query "car" spans a whole range of interests: buying a new or used car, rental, repair, equipment, spare parts, history, essays, reviews, and so on. Yet search engines, as one, return either sales (new, used) or car rental, and in rare cases the sale of spare parts. Thus more than half of users' possible interests fall out of the first few pages of results; many users do not get the information they need and are forced to refine the query repeatedly. Compare the results for the same word on Yandex (Figure 5.9) or Google with the results of the Nigma search engine (Figure 5.8), a machine that clusters results by topic, and you will see how few different topics reach the first pages of the "big" search engines.

The Internet gives the user a faster way of searching for information than traditional means. Information on the Internet can be sought by several methods, which differ significantly both in the efficiency and quality of the search and in the type of information retrieved. Depending on the seeker's aims and objectives, these methods are used individually or in combination.

1. Direct access by URL. The simplest search method, which presupposes that the address is known and boils down to the client contacting a server of a certain type, that is, sending a request using a certain protocol.

Typically, this process begins after the address is entered into the corresponding line of the browser or after an address description is selected in the browser window.

When accessing an address directly, you can abbreviate the standard URL by omitting default elements. For example, you can omit the protocol name (the protocol is then inferred from the address, or the default service is taken); omit the default file name (depending on the server configuration) and the final "/" character; or omit the server name and use relative directory addressing.

Note that this method underlies the operation of more complex technologies, since in the end everything comes down to a direct call to a URL.
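To make the mechanics concrete, here is a minimal Python sketch of direct access by URL: it fills in the default elements a user may omit, then contacts the server over HTTP. The address example.com is a placeholder, not a resource from the text.

```python
# A minimal sketch of "direct access by URL": expand an abbreviated
# address into a full URL, then fetch it over the named protocol.
from urllib.parse import urlparse, urlunparse
from urllib.request import urlopen

def expand(address: str) -> str:
    """Fill in the elements a user may omit when typing an address."""
    if "://" not in address:
        address = "http://" + address        # default protocol
    parts = urlparse(address)
    path = parts.path or "/"                 # default document (server's index file)
    return urlunparse((parts.scheme, parts.netloc, path, "", parts.query, ""))

url = expand("example.com")                  # -> "http://example.com/"
with urlopen(url) as response:               # the client contacts the server
    page = response.read()                   # using the HTTP protocol
print(url, len(page), "bytes")
```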

2. Using a set of links. Most servers presenting general hypertext materials also offer links to other servers (they contain the URL addresses of other resources). This way of searching for information is called searching by a set of links. Since all sites in the WWW space are in fact connected, information can be found by sequentially viewing linked pages in a browser.

It should be noted that network administrators do not set themselves the goal of placing a complete set of links on their server's main topics, or of constantly checking their correctness, so this search method guarantees neither completeness nor reliability. Although this fully manual method seems an anachronism in a network of more than 60 million nodes, "manual" browsing of Web pages is often the only option in the final stages of information retrieval, when mechanical "digging" gives way to deeper analysis. The use of directories, classified and subject lists, and all sorts of small handbooks also belongs to this type of search.
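As an illustration of searching by a set of links, here is a minimal Python sketch that collects the outgoing links of one page so they can be visited in turn. The start address is a placeholder; real browsing of this kind is, of course, done by hand in a browser.

```python
# A minimal sketch of searching "by a set of links": read one page,
# collect its outgoing hypertext links, and follow them sequentially.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                        # hypertext links live in <a href=...>
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def links_of(url: str) -> list[str]:
    with urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return [urljoin(url, href) for href in parser.links]

# Manual browsing, one hop deep: view the start page, then its neighbours.
start = "http://example.com/"                 # placeholder start page
for neighbour in links_of(start)[:5]:
    print("would visit:", neighbour)
```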

3. Using specialized search mechanisms: search engines, resource directories, metasearch, people search, teleconference addresses, search in file archives, and so on.

The main idea of search engines (servers) is to build a database of the words found in Internet documents, storing for each word the list of documents that contain it. The search is carried out over the content of documents. Documents that appear on the Internet are registered in search engines by special programs, without human intervention. As a result, we get complete, but by no means reliable, information.

Despite the abundance of words and word forms in natural languages, most of them are used infrequently, as the linguist Zipf noticed back in the late 1940s. Moreover, the most common words are conjunctions, prepositions, and articles, that is, words that are useless for information retrieval. As a result, the dictionary of the largest Internet search engine, AltaVista, is only a few gigabytes in size. Since all morphological units in the dictionary are ordered, the desired word can be found without sequential browsing. Keeping a list of the documents in which each word occurs allows the search engine to perform operations on these lists: merging, intersection, or subtraction.
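Here is a minimal Python sketch of such an inverted index over a toy three-document collection: frequent "useless" words are dropped, and the posting lists are combined by merging, intersection, and subtraction, as described above.

```python
# A minimal sketch of an inverted index: for every word, store the set of
# documents containing it; drop the most frequent "useless" words (Zipf),
# then combine posting lists with merge/intersection/subtraction.
docs = {
    1: "the car is for sale",
    2: "the history of the car",
    3: "car rental and car repair",
}
STOP_WORDS = {"the", "is", "for", "of", "and"}   # conjunctions, prepositions, articles

index: dict[str, set[int]] = {}
for doc_id, text in docs.items():
    for word in text.split():
        if word not in STOP_WORDS:
            index.setdefault(word, set()).add(doc_id)

# Operations on posting lists:
print(index["car"] | index["sale"])     # merge (OR)         -> {1, 2, 3}
print(index["car"] & index["rental"])   # intersection (AND) -> {3}
print(index["car"] - index["history"])  # subtraction (NOT)  -> {1, 3}
```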

A query to a search engine can be of two types: simple and complex.

In a simple query, a word or a set of words not separated by any special characters is given. In a complex query, words can be separated by logical operators and their combinations, and these operators have precedence.

The correctness and number of documents a search engine returns depend on how the query is formulated: whether it is simple or complex.
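The difference between the two query types can be shown on a toy inverted index, hard-coded below so the sketch is self-contained. The assumption that a simple query's words are implicitly intersected reflects common engine behavior, not a universal rule.

```python
# A minimal sketch of simple vs. complex queries over posting lists.
index = {"car": {1, 2, 3}, "sale": {1}, "rental": {3}, "history": {2}}
all_docs = {1, 2, 3}

def simple_query(words: str) -> set[int]:
    """Words separated only by spaces; assumed to be intersected."""
    result = all_docs
    for word in words.split():
        result = result & index.get(word, set())
    return result

print(simple_query("car sale"))              # -> {1}

# Complex query "car AND (sale OR rental) AND NOT history", written
# directly as operations on posting lists, with explicit precedence:
print((index["car"] & (index["sale"] | index["rental"])) - index["history"])
# -> {1, 3}
```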

Many search engines use subject directories for searching or coexist with them. It can therefore be quite difficult to classify search services: most can be attributed equally to search engines and to classification catalogs.

The best-known search engines include the following: American (AltaVista, HotBot, Lycos, Open Text, Mckinley, Excite, Cuiwww); Russian (Yandex, Search, Aport, Tela, Rambler).

Resource directories use a hierarchical (tree-like) and/or network database model, since every resource that has a URL, a description, and other information is subjected to a certain classification, defined by what is called a classifier. Sections of the classifier are called headings. The library analogue of such a directory is a systematic catalog.

The classifier is developed and improved by one team of authors. It is then used by another team of specialists, called systematizers. Systematizers, knowing the classifier, read documents and assign them classification indices indicating which sections of the classifier the documents correspond to.

There are techniques that make it easier to find information in directories. They are called referral and linking, and both are used by directory makers on the Internet. These techniques apply when a document could be assigned to one of several sections of the classifier and the searcher may not know which one.

Referral is used when the creators of the classifier and the systematizers can make a clear decision to assign a document to one section of the classifier, while a user looking for the document may turn to another section. In that other section, a reference (See) is placed, pointing to the section of the classifier that actually holds information about documents of this type.

For example, information about maps of countries could be placed in the sections "Science-Geography-Country", "Economics-Geography-Country", or "References-Map-Country". It is decided that country maps go in the second section, "Economics-Geography-Country", and references to it are placed in the other two. This technique is actively used in Yahoo!.

A link (See also) is used in a less unambiguous situation, when even the creators of the classifier and the systematizers cannot make a clear decision about assigning documents to a specific section. It is used especially in directories built on the network database model.
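A minimal Python sketch of a classifier using both techniques, with section names taken from the country-maps example above; the document names are invented for illustration.

```python
# A minimal sketch of a classifier with (See) referrals and (See also) links.
# A "see" entry redirects to the section that actually holds the documents;
# a "see_also" entry merely points to related sections without redirecting.
catalog = {
    "Economics/Geography/Country": {
        "documents": ["map_of_france.html", "map_of_japan.html"],
        "see_also": ["References/Map/Country"],
    },
    "Science/Geography/Country":   {"see": "Economics/Geography/Country"},
    "References/Map/Country":      {"see": "Economics/Geography/Country"},
}

def lookup(section: str) -> list[str]:
    entry = catalog.get(section, {})
    if "see" in entry:                       # referral: follow the (See) pointer
        return lookup(entry["see"])
    return entry.get("documents", [])

print(lookup("Science/Geography/Country"))  # resolved via the (See) referral
print(catalog["Economics/Geography/Country"]["see_also"])  # (See also) hints
```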

The following classification catalogs are common: European (Yellow Web, Euroseek); American (Yahoo!, Magellan, Infoseek, etc.); Russian (WWW, Stars, Weblist, Rocit, Au).

The advantage of metasearch over search engines and directories is that it provides a single interface, or access point, to Internet indexes.

There are two types of multiple-access tools:

  • 1) multiple-access services provide, from their "home pages", a menu of search tools. Their popularity stems from the fact that the menu makes so many search engines available: the user can move easily from one search engine to another without having to remember URLs or type them into the browser. The most popular multiple-access services are All-in-One (http://www.allonesearch.com); C/Net (http://www.search.com); Internet Sleuth (http://isleuth.com);
  • 2) meta-indexes, often called multiple or integrated search services, provide a single search form; the query the user enters is sent to several search engines at once, and the individual results are presented as a single list. This type of service is valuable when the maximum sample of documents on a subject is needed and when the document sought is unique.

Another advantage of the meta-index is that the results from the different search engines are deduplicated, that is, the meta-index does not return duplicate links.
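A minimal Python sketch of a meta-index: one query is sent to several engines and the results are merged without duplicate links. The two "engines" here are stub functions standing in for real search services; no real search API is implied.

```python
# A minimal sketch of a meta-index: the same query goes to every engine,
# and the merged result list contains each link only once.
def engine_a(query: str) -> list[str]:
    return ["http://a.example/1", "http://shared.example/x"]

def engine_b(query: str) -> list[str]:
    return ["http://shared.example/x", "http://b.example/2"]

def metasearch(query: str) -> list[str]:
    merged, seen = [], set()
    for engine in (engine_a, engine_b):      # same query sent to every engine
        for url in engine(query):
            if url not in seen:              # drop duplicate links
                seen.add(url)
                merged.append(url)
    return merged

print(metasearch("car rental"))
# -> ['http://a.example/1', 'http://shared.example/x', 'http://b.example/2']
```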

The main disadvantage of this kind of search service is that it does not allow the individual capabilities of the various search engines to be used.

The most popular meta-indexes are Beaucoup (http://www.beacoup.com) and Pathfinder (http://www.medialingua.ru/www/wwwsearc.htm).

It should be noted that the division between these two kinds of service is very blurred: some of the larger ones offer access to individual search engines as well as meta-index search.

So far we have considered searching mostly for hypertext materials. However, other Internet resources can be searched for just as well. For this there are both specialized search engines (which search only for resources of one type) and "ordinary" search engines that offer additional facilities for finding non-hypertext documents.

People search. There is no single list or directory of e-mail addresses, just as there is no single printed telephone directory for the whole world. There are several commercial and non-commercial help services, but most cover some particular region or discipline. They are compiled by various methods: they may be assembled by special computer programs from Internet newsgroup posts, or created by individuals who are not necessarily the owners of the addresses. These directories are often called "white pages" and include directories of e-mail and postal addresses as well as telephone numbers. One of the most reliable ways to find personal contact information, if you know the organization a person belongs to, is to go to the organization's home page. Another way is to use personal directories.

As a result, the search engine should return the URL or e-mail address of the desired person.

The main personal directories are WhoWhere (http://www.whowhere.com); Yahoo! People Search (http://yahoo.com/search/people); Four11 (http://www.four1l.com).

There are not many specialized search engines that look for conference URLs. One of them is DejaNews (http://www.dejanews.com), the most sophisticated search engine for newsgroups (Usenet). It features an abundance of advanced search options, useful filters for "cleaning up" results, a formal logical query syntax, and the ability to search for files.

Many search engines provide conference search as an additional service (Yahoo!, AltaVista, Anzwers, Galaxy, InfoSeek, etc.). You can enter the conference search mode using the Usenet button.

Search in file archives. The Internet contains a vast number of resources, a large part of which are file archives on FTP servers. Specialized search engines are used to find them. Files are registered with the help of special programs, and file names are indexed.

Some non-specialized search engines also allow searching file archives. For example, typing search.ftp into AltaVista will return links to servers that specialize in finding files in FTP archives. As a result, the search engine should return the URL of the file.

The main search mechanisms for file archives are Archie (http://archie.de); Filez (http://www.filez.com); FTP Search (http://ftpsearch.city.ru).

The term comes from the English "search engine optimization", that is, optimization for search engines. In recent years this field has been developing rapidly in Runet and is one of the main and most effective methods behind the success of any Internet project.

Effect: stable high positions in search engines.

Correcting errors in site navigation and editing the program code: this is work on the internal factors of the site, which affect both the site's convenience for users and its "friendliness" toward search engine robots.

Content growth: adding new pages containing information useful to target visitors.

Placing links on thematic resources differs from chaotic link exchange in that links are published only on sites whose visitors may be genuinely interested in the information posted on the pages of your site.

The use of "white hat" optimization, as a rule, leads not only to the promotion of the site in the first positions, but also to an increase in site visitors by several dozen times.

Hello friends! In this article, we will cover the basic concepts and principles of SEO for the search engines Yandex and Google. So let's go!

Basic Principles of Search Engine Optimization (SEO)

The importance of guest posting

Not so long ago, there was real hype about search engines banning guest posting: it was branded as spam. Nevertheless, guest posting is still very popular among marketers for promoting an Internet resource. The main thing required of you is to create high-quality content.

Creating a favorable atmosphere on the web resource for users

Whatever SEO activities you conduct, remember that competent site design is the guarantee of its traffic. Indeed, no matter what tricks you use to promote a web resource, it is all meaningless if your site is not interesting to the user. You therefore need to create a simple and understandable interface. Only then does your site have a chance to take high positions in search engines.

The main element of any web resource is the placement of high-quality content

That's all for today, good luck to everyone and see you soon!
